00:00:00.002 Started by upstream project "autotest-per-patch" build number 124215 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.036 The recommended git tool is: git 00:00:00.036 using credential 00000000-0000-0000-0000-000000000002 00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.054 Fetching changes from the remote Git repository 00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.075 Using shallow fetch with depth 1 00:00:00.075 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.075 > git --version # timeout=10 00:00:00.107 > git --version # 'git version 2.39.2' 00:00:00.107 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.158 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.158 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.544 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.557 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.571 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:04.571 > git config core.sparsecheckout # timeout=10 00:00:04.583 > git read-tree -mu HEAD # timeout=10 00:00:04.600 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:04.624 Commit message: "pool: fixes for VisualBuild class" 00:00:04.624 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:04.723 [Pipeline] Start of Pipeline 00:00:04.737 [Pipeline] library 00:00:04.738 Loading library shm_lib@master 00:00:04.738 Library shm_lib@master is cached. Copying from home. 00:00:04.751 [Pipeline] node 00:00:04.762 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:04.764 [Pipeline] { 00:00:04.775 [Pipeline] catchError 00:00:04.777 [Pipeline] { 00:00:04.788 [Pipeline] wrap 00:00:04.797 [Pipeline] { 00:00:04.805 [Pipeline] stage 00:00:04.808 [Pipeline] { (Prologue) 00:00:04.823 [Pipeline] echo 00:00:04.824 Node: VM-host-SM4 00:00:04.830 [Pipeline] cleanWs 00:00:04.837 [WS-CLEANUP] Deleting project workspace... 00:00:04.837 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.842 [WS-CLEANUP] done 00:00:04.997 [Pipeline] setCustomBuildProperty 00:00:05.068 [Pipeline] nodesByLabel 00:00:05.069 Found a total of 2 nodes with the 'sorcerer' label 00:00:05.077 [Pipeline] httpRequest 00:00:05.080 HttpMethod: GET 00:00:05.081 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.081 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.083 Response Code: HTTP/1.1 200 OK 00:00:05.083 Success: Status code 200 is in the accepted range: 200,404 00:00:05.084 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.847 [Pipeline] sh 00:00:06.167 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:06.246 [Pipeline] httpRequest 00:00:06.263 HttpMethod: GET 00:00:06.263 URL: http://10.211.164.101/packages/spdk_d88da79a382876f8eebd9ba88e3cf1cd34bb4992.tar.gz 00:00:06.264 Sending request to url: http://10.211.164.101/packages/spdk_d88da79a382876f8eebd9ba88e3cf1cd34bb4992.tar.gz 00:00:06.265 Response Code: HTTP/1.1 200 OK 00:00:06.266 Success: Status code 200 is in the accepted range: 200,404 00:00:06.266 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_d88da79a382876f8eebd9ba88e3cf1cd34bb4992.tar.gz 00:00:28.033 [Pipeline] sh 00:00:28.312 + tar --no-same-owner -xf spdk_d88da79a382876f8eebd9ba88e3cf1cd34bb4992.tar.gz 00:00:30.857 [Pipeline] sh 00:00:31.134 + git -C spdk log --oneline -n5 00:00:31.134 d88da79a3 test: Run go tests 00:00:31.134 a3efa13e1 examples: Update hello_gorpc 00:00:31.134 3c7f5112b go/rpc: Implementation of wrapper for go-rpc client 00:00:31.134 0a5aebcde go/rpc: Initial implementation of rpc call generator 00:00:31.134 8b1e208cc python/rpc: Python rpc docs generator. 
00:00:31.152 [Pipeline] writeFile 00:00:31.168 [Pipeline] sh 00:00:31.449 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:31.460 [Pipeline] sh 00:00:31.738 + cat autorun-spdk.conf 00:00:31.738 SPDK_TEST_UNITTEST=1 00:00:31.738 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.738 SPDK_TEST_NVME=1 00:00:31.738 SPDK_TEST_BLOCKDEV=1 00:00:31.738 SPDK_RUN_ASAN=1 00:00:31.738 SPDK_RUN_UBSAN=1 00:00:31.738 SPDK_TEST_RAID5=1 00:00:31.738 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.756 RUN_NIGHTLY=0 00:00:31.758 [Pipeline] } 00:00:31.772 [Pipeline] // stage 00:00:31.785 [Pipeline] stage 00:00:31.787 [Pipeline] { (Run VM) 00:00:31.800 [Pipeline] sh 00:00:32.093 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.093 + echo 'Start stage prepare_nvme.sh' 00:00:32.093 Start stage prepare_nvme.sh 00:00:32.093 + [[ -n 6 ]] 00:00:32.093 + disk_prefix=ex6 00:00:32.093 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:00:32.093 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:00:32.093 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:00:32.093 ++ SPDK_TEST_UNITTEST=1 00:00:32.093 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.093 ++ SPDK_TEST_NVME=1 00:00:32.093 ++ SPDK_TEST_BLOCKDEV=1 00:00:32.093 ++ SPDK_RUN_ASAN=1 00:00:32.093 ++ SPDK_RUN_UBSAN=1 00:00:32.093 ++ SPDK_TEST_RAID5=1 00:00:32.093 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.093 ++ RUN_NIGHTLY=0 00:00:32.093 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:32.093 + nvme_files=() 00:00:32.093 + declare -A nvme_files 00:00:32.093 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.093 + nvme_files['nvme.img']=5G 00:00:32.093 + nvme_files['nvme-cmb.img']=5G 00:00:32.093 + nvme_files['nvme-multi0.img']=4G 00:00:32.093 + nvme_files['nvme-multi1.img']=4G 00:00:32.093 + nvme_files['nvme-multi2.img']=4G 00:00:32.093 + nvme_files['nvme-openstack.img']=8G 00:00:32.093 + nvme_files['nvme-zns.img']=5G 00:00:32.093 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.093 + (( SPDK_TEST_FTL == 1 )) 00:00:32.093 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.093 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:32.093 + for nvme in "${!nvme_files[@]}" 00:00:32.093 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:32.093 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.093 + for nvme in "${!nvme_files[@]}" 00:00:32.093 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:32.093 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.093 + for nvme in "${!nvme_files[@]}" 00:00:32.093 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:32.093 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:32.093 + for nvme in "${!nvme_files[@]}" 00:00:32.093 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:32.093 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.093 + for nvme in "${!nvme_files[@]}" 00:00:32.093 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:32.351 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.351 + for nvme in "${!nvme_files[@]}" 00:00:32.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:32.351 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.351 + for nvme in "${!nvme_files[@]}" 00:00:32.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:33.285 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.285 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:33.285 + echo 'End stage prepare_nvme.sh' 00:00:33.285 End stage prepare_nvme.sh 00:00:33.301 [Pipeline] sh 00:00:33.583 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:33.583 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -H -a -v -f ubuntu2204 00:00:33.583 00:00:33.583 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:00:33.583 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:00:33.583 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:00:33.583 HELP=0 00:00:33.583 DRY_RUN=0 00:00:33.583 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img, 00:00:33.583 NVME_DISKS_TYPE=nvme, 00:00:33.583 NVME_AUTO_CREATE=0 00:00:33.583 NVME_DISKS_NAMESPACES=, 00:00:33.583 NVME_CMB=, 00:00:33.583 NVME_PMR=, 00:00:33.583 NVME_ZNS=, 00:00:33.583 NVME_MS=, 00:00:33.583 NVME_FDP=, 00:00:33.583 SPDK_VAGRANT_DISTRO=ubuntu2204 00:00:33.583 SPDK_VAGRANT_VMCPU=10 00:00:33.583 SPDK_VAGRANT_VMRAM=12288 00:00:33.583 SPDK_VAGRANT_PROVIDER=libvirt 00:00:33.583 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:33.583 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:33.583 SPDK_OPENSTACK_NETWORK=0 
00:00:33.583 VAGRANT_PACKAGE_BOX=0 00:00:33.583 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:33.583 FORCE_DISTRO=true 00:00:33.583 VAGRANT_BOX_VERSION= 00:00:33.583 EXTRA_VAGRANTFILES= 00:00:33.583 NIC_MODEL=e1000 00:00:33.583 00:00:33.583 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:00:33.583 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:36.887 Bringing machine 'default' up with 'libvirt' provider... 00:00:37.146 ==> default: Creating image (snapshot of base box volume). 00:00:37.146 ==> default: Creating domain with the following settings... 00:00:37.146 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1718018648_8b7b802a615fe3e5e3a9 00:00:37.146 ==> default: -- Domain type: kvm 00:00:37.146 ==> default: -- Cpus: 10 00:00:37.146 ==> default: -- Feature: acpi 00:00:37.146 ==> default: -- Feature: apic 00:00:37.146 ==> default: -- Feature: pae 00:00:37.146 ==> default: -- Memory: 12288M 00:00:37.146 ==> default: -- Memory Backing: hugepages: 00:00:37.146 ==> default: -- Management MAC: 00:00:37.146 ==> default: -- Loader: 00:00:37.146 ==> default: -- Nvram: 00:00:37.146 ==> default: -- Base box: spdk/ubuntu2204 00:00:37.146 ==> default: -- Storage pool: default 00:00:37.146 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1718018648_8b7b802a615fe3e5e3a9.img (20G) 00:00:37.146 ==> default: -- Volume Cache: default 00:00:37.146 ==> default: -- Kernel: 00:00:37.146 ==> default: -- Initrd: 00:00:37.146 ==> default: -- Graphics Type: vnc 00:00:37.146 ==> default: -- Graphics Port: -1 00:00:37.146 ==> default: -- Graphics IP: 127.0.0.1 00:00:37.146 ==> default: -- Graphics Password: Not defined 00:00:37.146 ==> default: -- Video Type: cirrus 00:00:37.146 ==> default: -- Video VRAM: 9216 00:00:37.146 ==> default: -- Sound Type: 00:00:37.146 ==> default: -- Keymap: en-us 00:00:37.146 ==> default: -- TPM Path: 00:00:37.146 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:37.146 ==> default: -- Command line args: 00:00:37.146 ==> default: -> value=-device, 00:00:37.146 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:37.146 ==> default: -> value=-drive, 00:00:37.146 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:00:37.146 ==> default: -> value=-device, 00:00:37.146 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.405 ==> default: Creating shared folders metadata... 00:00:37.405 ==> default: Starting domain. 00:00:39.306 ==> default: Waiting for domain to get an IP address... 00:00:49.344 ==> default: Waiting for SSH to become available... 00:00:51.877 ==> default: Configuring and enabling network interfaces... 00:00:57.143 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:02.411 ==> default: Mounting SSHFS shared folder... 00:01:02.669 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:01:02.669 ==> default: Checking Mount.. 00:01:03.604 ==> default: Folder Successfully Mounted! 00:01:03.604 ==> default: Running provisioner: file... 00:01:03.863 default: ~/.gitconfig => .gitconfig 00:01:04.122 00:01:04.122 SUCCESS! 
00:01:04.122 00:01:04.122 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:01:04.122 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:04.122 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:01:04.122 00:01:04.131 [Pipeline] } 00:01:04.150 [Pipeline] // stage 00:01:04.158 [Pipeline] dir 00:01:04.159 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:01:04.161 [Pipeline] { 00:01:04.174 [Pipeline] catchError 00:01:04.175 [Pipeline] { 00:01:04.190 [Pipeline] sh 00:01:04.470 + + vagrantsed ssh-config -ne --host /^Host/,$p vagrant 00:01:04.470 00:01:04.470 + tee ssh_conf 00:01:08.656 Host vagrant 00:01:08.656 HostName 192.168.121.56 00:01:08.656 User vagrant 00:01:08.656 Port 22 00:01:08.656 UserKnownHostsFile /dev/null 00:01:08.656 StrictHostKeyChecking no 00:01:08.656 PasswordAuthentication no 00:01:08.656 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:01:08.656 IdentitiesOnly yes 00:01:08.656 LogLevel FATAL 00:01:08.656 ForwardAgent yes 00:01:08.656 ForwardX11 yes 00:01:08.656 00:01:08.671 [Pipeline] withEnv 00:01:08.673 [Pipeline] { 00:01:08.688 [Pipeline] sh 00:01:08.966 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:08.966 source /etc/os-release 00:01:08.966 [[ -e /image.version ]] && img=$(< /image.version) 00:01:08.966 # Minimal, systemd-like check. 00:01:08.966 if [[ -e /.dockerenv ]]; then 00:01:08.966 # Clear garbage from the node's name: 00:01:08.966 # agt-er_autotest_547-896 -> autotest_547-896 00:01:08.966 # $HOSTNAME is the actual container id 00:01:08.966 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:08.966 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:08.966 # We can assume this is a mount from a host where container is running, 00:01:08.966 # so fetch its hostname to easily identify the target swarm worker. 
00:01:08.966 container="$(< /etc/hostname) ($agent)" 00:01:08.966 else 00:01:08.966 # Fallback 00:01:08.966 container=$agent 00:01:08.966 fi 00:01:08.966 fi 00:01:08.966 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:08.966 00:01:09.234 [Pipeline] } 00:01:09.252 [Pipeline] // withEnv 00:01:09.260 [Pipeline] setCustomBuildProperty 00:01:09.273 [Pipeline] stage 00:01:09.275 [Pipeline] { (Tests) 00:01:09.292 [Pipeline] sh 00:01:09.592 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:09.862 [Pipeline] sh 00:01:10.137 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:10.408 [Pipeline] timeout 00:01:10.408 Timeout set to expire in 1 hr 30 min 00:01:10.410 [Pipeline] { 00:01:10.425 [Pipeline] sh 00:01:10.701 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:11.266 HEAD is now at d88da79a3 test: Run go tests 00:01:11.279 [Pipeline] sh 00:01:11.556 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:11.829 [Pipeline] sh 00:01:12.107 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:12.379 [Pipeline] sh 00:01:12.710 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:01:12.969 ++ readlink -f spdk_repo 00:01:12.969 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:12.969 + [[ -n /home/vagrant/spdk_repo ]] 00:01:12.969 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:12.969 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:12.969 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:12.969 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:12.969 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:12.969 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:01:12.969 + cd /home/vagrant/spdk_repo 00:01:12.969 + source /etc/os-release 00:01:12.969 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:01:12.969 ++ NAME=Ubuntu 00:01:12.969 ++ VERSION_ID=22.04 00:01:12.969 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:01:12.969 ++ VERSION_CODENAME=jammy 00:01:12.969 ++ ID=ubuntu 00:01:12.969 ++ ID_LIKE=debian 00:01:12.969 ++ HOME_URL=https://www.ubuntu.com/ 00:01:12.969 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:12.969 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:12.969 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:12.969 ++ UBUNTU_CODENAME=jammy 00:01:12.969 + uname -a 00:01:12.969 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:12.969 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:13.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:01:13.228 Hugepages 00:01:13.228 node hugesize free / total 00:01:13.228 node0 1048576kB 0 / 0 00:01:13.228 node0 2048kB 0 / 0 00:01:13.228 00:01:13.228 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.228 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:13.228 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:13.228 + rm -f /tmp/spdk-ld-path 00:01:13.228 + source autorun-spdk.conf 00:01:13.487 ++ SPDK_TEST_UNITTEST=1 00:01:13.488 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.488 ++ SPDK_TEST_NVME=1 00:01:13.488 ++ SPDK_TEST_BLOCKDEV=1 00:01:13.488 ++ SPDK_RUN_ASAN=1 00:01:13.488 ++ SPDK_RUN_UBSAN=1 00:01:13.488 ++ SPDK_TEST_RAID5=1 00:01:13.488 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.488 ++ RUN_NIGHTLY=0 00:01:13.488 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.488 + [[ -n '' ]] 00:01:13.488 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:13.488 + for M in /var/spdk/build-*-manifest.txt 00:01:13.488 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:13.488 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:13.488 + for M in /var/spdk/build-*-manifest.txt 00:01:13.488 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:13.488 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:13.488 ++ uname 00:01:13.488 + [[ Linux == \L\i\n\u\x ]] 00:01:13.488 + sudo dmesg -T 00:01:13.488 + sudo dmesg --clear 00:01:13.488 + dmesg_pid=2150 00:01:13.488 + [[ Ubuntu == FreeBSD ]] 00:01:13.488 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.488 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.488 + sudo dmesg -Tw 00:01:13.488 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.488 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.488 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.488 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:13.488 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.488 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:13.488 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:13.488 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:13.488 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.488 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:13.488 Test configuration: 00:01:13.488 SPDK_TEST_UNITTEST=1 00:01:13.488 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.488 SPDK_TEST_NVME=1 00:01:13.488 SPDK_TEST_BLOCKDEV=1 00:01:13.488 SPDK_RUN_ASAN=1 00:01:13.488 SPDK_RUN_UBSAN=1 00:01:13.488 SPDK_TEST_RAID5=1 00:01:13.488 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.488 RUN_NIGHTLY=0 11:24:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:13.488 11:24:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.488 11:24:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.488 11:24:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.488 11:24:44 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:13.488 11:24:44 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:13.488 11:24:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:13.488 11:24:44 -- paths/export.sh@5 -- $ export PATH 00:01:13.488 11:24:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:13.488 11:24:44 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:13.488 11:24:44 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:13.488 11:24:44 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718018684.XXXXXX 00:01:13.488 11:24:44 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718018684.oGbwUH 00:01:13.488 11:24:44 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:13.488 11:24:44 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:13.488 11:24:44 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:13.488 11:24:44 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:13.488 11:24:44 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.488 11:24:44 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:13.488 11:24:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:13.488 11:24:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.488 11:24:44 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:13.488 11:24:44 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:13.488 11:24:44 -- pm/common@17 -- $ local monitor 00:01:13.488 11:24:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.488 11:24:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.488 11:24:44 -- pm/common@25 -- $ sleep 1 00:01:13.488 11:24:44 -- pm/common@21 -- $ date +%s 00:01:13.488 11:24:44 -- pm/common@21 -- $ date +%s 00:01:13.488 11:24:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718018684 00:01:13.488 11:24:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718018684 00:01:13.746 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718018684_collect-vmstat.pm.log 00:01:13.746 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718018684_collect-cpu-load.pm.log 00:01:13.746 Traceback (most recent call last): 00:01:13.746 File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in 00:01:13.746 import spdk.rpc as rpc # noqa 00:01:13.746 File "/home/vagrant/spdk_repo/spdk/python/spdk/rpc/__init__.py", line 13, in 00:01:13.746 from . 
import bdev 00:01:13.746 File "/home/vagrant/spdk_repo/spdk/python/spdk/rpc/bdev.py", line 6, in 00:01:13.746 from spdk.rpc.rpc import * 00:01:13.746 ModuleNotFoundError: No module named 'spdk.rpc.rpc' 00:01:14.680 11:24:45 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:14.680 11:24:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:14.680 11:24:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:14.680 11:24:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:14.680 11:24:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:14.680 Mon Jun 10 11:24:45 UTC 2024 00:01:14.680 11:24:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:14.680 v24.09-pre-66-gd88da79a3 00:01:14.680 11:24:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:14.680 11:24:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:14.680 11:24:45 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:14.680 11:24:45 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:14.680 11:24:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.680 ************************************ 00:01:14.680 START TEST asan 00:01:14.680 ************************************ 00:01:14.680 using asan 00:01:14.680 11:24:45 asan -- common/autotest_common.sh@1124 -- $ echo 'using asan' 00:01:14.680 00:01:14.680 real 0m0.000s 00:01:14.680 user 0m0.000s 00:01:14.680 sys 0m0.000s 00:01:14.680 11:24:45 asan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:14.680 ************************************ 00:01:14.680 END TEST asan 00:01:14.680 ************************************ 00:01:14.680 11:24:45 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:14.680 11:24:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:14.680 11:24:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:14.680 11:24:45 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:14.680 11:24:45 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:14.680 11:24:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.680 ************************************ 00:01:14.680 START TEST ubsan 00:01:14.680 ************************************ 00:01:14.680 using ubsan 00:01:14.680 11:24:45 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:14.680 00:01:14.680 real 0m0.000s 00:01:14.680 user 0m0.000s 00:01:14.680 sys 0m0.000s 00:01:14.680 11:24:45 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:14.680 11:24:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:14.680 ************************************ 00:01:14.680 END TEST ubsan 00:01:14.681 ************************************ 00:01:14.681 11:24:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:14.681 11:24:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:14.681 11:24:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:14.681 11:24:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:14.681 11:24:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:14.681 11:24:46 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:14.681 11:24:46 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:14.681 11:24:46 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:01:14.681 11:24:46 -- common/autotest_common.sh@1100 -- $ '[' 2 -le 1 ']' 00:01:14.681 11:24:46 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:14.681 11:24:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.681 ************************************ 00:01:14.681 START TEST 
unittest_build 00:01:14.681 ************************************ 00:01:14.681 11:24:46 unittest_build -- common/autotest_common.sh@1124 -- $ _unittest_build 00:01:14.681 11:24:46 unittest_build -- common/autobuild_common.sh@404 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:14.939 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:14.939 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:15.201 Using 'verbs' RDMA provider 00:01:31.036 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:45.914 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:45.914 Creating mk/config.mk...done. 00:01:45.914 Creating mk/cc.flags.mk...done. 00:01:45.914 Type 'make' to build. 00:01:45.914 11:25:17 unittest_build -- common/autobuild_common.sh@405 -- $ make -j10 00:02:00.814 The Meson build system 00:02:00.814 Version: 1.4.0 00:02:00.814 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:00.814 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:00.814 Build type: native build 00:02:00.814 Program cat found: YES (/usr/bin/cat) 00:02:00.814 Project name: DPDK 00:02:00.814 Project version: 24.03.0 00:02:00.814 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:00.814 C linker for the host machine: cc ld.bfd 2.38 00:02:00.814 Host machine cpu family: x86_64 00:02:00.814 Host machine cpu: x86_64 00:02:00.814 Message: ## Building in Developer Mode ## 00:02:00.814 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.814 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.814 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.814 Program python3 found: YES (/usr/bin/python3) 00:02:00.814 Program cat found: YES (/usr/bin/cat) 00:02:00.814 Compiler for C supports arguments -march=native: YES 00:02:00.814 Checking for size of "void *" : 8 00:02:00.814 Checking for size of "void *" : 8 (cached) 00:02:00.814 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:00.814 Library m found: YES 00:02:00.814 Library numa found: YES 00:02:00.814 Has header "numaif.h" : YES 00:02:00.814 Library fdt found: NO 00:02:00.814 Library execinfo found: NO 00:02:00.814 Has header "execinfo.h" : YES 00:02:00.814 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:00.814 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.814 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.814 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.814 Run-time dependency openssl found: YES 3.0.2 00:02:00.814 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:00.814 Library pcap found: NO 00:02:00.814 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.814 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.814 Compiler for C supports arguments -Wformat: YES 00:02:00.814 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:00.814 Compiler for C supports arguments -Wformat-security: YES 00:02:00.814 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.814 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.814 Compiler for C 
supports arguments -Wnested-externs: YES 00:02:00.814 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.814 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.814 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.814 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.814 Compiler for C supports arguments -Wundef: YES 00:02:00.814 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.814 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.814 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.814 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.814 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.814 Program objdump found: YES (/usr/bin/objdump) 00:02:00.814 Compiler for C supports arguments -mavx512f: YES 00:02:00.814 Checking if "AVX512 checking" compiles: YES 00:02:00.814 Fetching value of define "__SSE4_2__" : 1 00:02:00.814 Fetching value of define "__AES__" : 1 00:02:00.814 Fetching value of define "__AVX__" : 1 00:02:00.814 Fetching value of define "__AVX2__" : 1 00:02:00.814 Fetching value of define "__AVX512BW__" : 1 00:02:00.814 Fetching value of define "__AVX512CD__" : 1 00:02:00.814 Fetching value of define "__AVX512DQ__" : 1 00:02:00.814 Fetching value of define "__AVX512F__" : 1 00:02:00.814 Fetching value of define "__AVX512VL__" : 1 00:02:00.814 Fetching value of define "__PCLMUL__" : 1 00:02:00.814 Fetching value of define "__RDRND__" : 1 00:02:00.814 Fetching value of define "__RDSEED__" : 1 00:02:00.814 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:00.814 Fetching value of define "__znver1__" : (undefined) 00:02:00.814 Fetching value of define "__znver2__" : (undefined) 00:02:00.814 Fetching value of define "__znver3__" : (undefined) 00:02:00.815 Fetching value of define "__znver4__" : (undefined) 00:02:00.815 Library asan found: YES 00:02:00.815 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.815 Message: lib/log: Defining dependency "log" 00:02:00.815 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.815 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.815 Library rt found: YES 00:02:00.815 Checking for function "getentropy" : NO 00:02:00.815 Message: lib/eal: Defining dependency "eal" 00:02:00.815 Message: lib/ring: Defining dependency "ring" 00:02:00.815 Message: lib/rcu: Defining dependency "rcu" 00:02:00.815 Message: lib/mempool: Defining dependency "mempool" 00:02:00.815 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.815 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.815 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.815 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.815 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.815 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:00.815 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:00.815 Compiler for C supports arguments -mpclmul: YES 00:02:00.815 Compiler for C supports arguments -maes: YES 00:02:00.815 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.815 Compiler for C supports arguments -mavx512bw: YES 00:02:00.815 Compiler for C supports arguments -mavx512dq: YES 00:02:00.815 Compiler for C supports arguments -mavx512vl: YES 00:02:00.815 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.815 Compiler for C supports arguments -mavx2: YES 00:02:00.815 Compiler 
for C supports arguments -mavx: YES 00:02:00.815 Message: lib/net: Defining dependency "net" 00:02:00.815 Message: lib/meter: Defining dependency "meter" 00:02:00.815 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.815 Message: lib/pci: Defining dependency "pci" 00:02:00.815 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.815 Message: lib/hash: Defining dependency "hash" 00:02:00.815 Message: lib/timer: Defining dependency "timer" 00:02:00.815 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.815 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.815 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.815 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.815 Message: lib/power: Defining dependency "power" 00:02:00.815 Message: lib/reorder: Defining dependency "reorder" 00:02:00.815 Message: lib/security: Defining dependency "security" 00:02:00.815 Has header "linux/userfaultfd.h" : YES 00:02:00.815 Has header "linux/vduse.h" : YES 00:02:00.815 Message: lib/vhost: Defining dependency "vhost" 00:02:00.815 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.815 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.815 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.815 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.815 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.815 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.815 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.815 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.815 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.815 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.815 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.815 Configuring doxy-api-html.conf using configuration 00:02:00.815 Configuring doxy-api-man.conf using configuration 00:02:00.815 Program mandb found: YES (/usr/bin/mandb) 00:02:00.815 Program sphinx-build found: NO 00:02:00.815 Configuring rte_build_config.h using configuration 00:02:00.815 Message: 00:02:00.815 ================= 00:02:00.815 Applications Enabled 00:02:00.815 ================= 00:02:00.815 00:02:00.815 apps: 00:02:00.815 00:02:00.815 00:02:00.815 Message: 00:02:00.815 ================= 00:02:00.815 Libraries Enabled 00:02:00.815 ================= 00:02:00.815 00:02:00.815 libs: 00:02:00.815 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.815 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.815 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.815 00:02:00.815 Message: 00:02:00.815 =============== 00:02:00.815 Drivers Enabled 00:02:00.815 =============== 00:02:00.815 00:02:00.815 common: 00:02:00.815 00:02:00.815 bus: 00:02:00.815 pci, vdev, 00:02:00.815 mempool: 00:02:00.815 ring, 00:02:00.815 dma: 00:02:00.815 00:02:00.815 net: 00:02:00.815 00:02:00.815 crypto: 00:02:00.815 00:02:00.815 compress: 00:02:00.815 00:02:00.815 vdpa: 00:02:00.815 00:02:00.815 00:02:00.815 Message: 00:02:00.815 ================= 00:02:00.815 Content Skipped 00:02:00.815 ================= 00:02:00.815 00:02:00.815 apps: 00:02:00.815 dumpcap: explicitly disabled via build config 00:02:00.815 graph: explicitly disabled via build config 00:02:00.815 pdump: explicitly disabled via build config 00:02:00.815 proc-info: 
explicitly disabled via build config 00:02:00.815 test-acl: explicitly disabled via build config 00:02:00.815 test-bbdev: explicitly disabled via build config 00:02:00.815 test-cmdline: explicitly disabled via build config 00:02:00.815 test-compress-perf: explicitly disabled via build config 00:02:00.815 test-crypto-perf: explicitly disabled via build config 00:02:00.815 test-dma-perf: explicitly disabled via build config 00:02:00.815 test-eventdev: explicitly disabled via build config 00:02:00.815 test-fib: explicitly disabled via build config 00:02:00.815 test-flow-perf: explicitly disabled via build config 00:02:00.815 test-gpudev: explicitly disabled via build config 00:02:00.815 test-mldev: explicitly disabled via build config 00:02:00.815 test-pipeline: explicitly disabled via build config 00:02:00.815 test-pmd: explicitly disabled via build config 00:02:00.815 test-regex: explicitly disabled via build config 00:02:00.815 test-sad: explicitly disabled via build config 00:02:00.815 test-security-perf: explicitly disabled via build config 00:02:00.815 00:02:00.815 libs: 00:02:00.815 argparse: explicitly disabled via build config 00:02:00.815 metrics: explicitly disabled via build config 00:02:00.815 acl: explicitly disabled via build config 00:02:00.815 bbdev: explicitly disabled via build config 00:02:00.815 bitratestats: explicitly disabled via build config 00:02:00.815 bpf: explicitly disabled via build config 00:02:00.815 cfgfile: explicitly disabled via build config 00:02:00.815 distributor: explicitly disabled via build config 00:02:00.815 efd: explicitly disabled via build config 00:02:00.815 eventdev: explicitly disabled via build config 00:02:00.815 dispatcher: explicitly disabled via build config 00:02:00.815 gpudev: explicitly disabled via build config 00:02:00.815 gro: explicitly disabled via build config 00:02:00.815 gso: explicitly disabled via build config 00:02:00.815 ip_frag: explicitly disabled via build config 00:02:00.815 jobstats: explicitly disabled via build config 00:02:00.815 latencystats: explicitly disabled via build config 00:02:00.815 lpm: explicitly disabled via build config 00:02:00.815 member: explicitly disabled via build config 00:02:00.815 pcapng: explicitly disabled via build config 00:02:00.815 rawdev: explicitly disabled via build config 00:02:00.815 regexdev: explicitly disabled via build config 00:02:00.815 mldev: explicitly disabled via build config 00:02:00.815 rib: explicitly disabled via build config 00:02:00.815 sched: explicitly disabled via build config 00:02:00.815 stack: explicitly disabled via build config 00:02:00.815 ipsec: explicitly disabled via build config 00:02:00.815 pdcp: explicitly disabled via build config 00:02:00.815 fib: explicitly disabled via build config 00:02:00.815 port: explicitly disabled via build config 00:02:00.815 pdump: explicitly disabled via build config 00:02:00.815 table: explicitly disabled via build config 00:02:00.815 pipeline: explicitly disabled via build config 00:02:00.815 graph: explicitly disabled via build config 00:02:00.815 node: explicitly disabled via build config 00:02:00.815 00:02:00.815 drivers: 00:02:00.815 common/cpt: not in enabled drivers build config 00:02:00.815 common/dpaax: not in enabled drivers build config 00:02:00.815 common/iavf: not in enabled drivers build config 00:02:00.815 common/idpf: not in enabled drivers build config 00:02:00.815 common/ionic: not in enabled drivers build config 00:02:00.815 common/mvep: not in enabled drivers build config 00:02:00.815 common/octeontx: 
not in enabled drivers build config 00:02:00.815 bus/auxiliary: not in enabled drivers build config 00:02:00.815 bus/cdx: not in enabled drivers build config 00:02:00.815 bus/dpaa: not in enabled drivers build config 00:02:00.815 bus/fslmc: not in enabled drivers build config 00:02:00.815 bus/ifpga: not in enabled drivers build config 00:02:00.815 bus/platform: not in enabled drivers build config 00:02:00.815 bus/uacce: not in enabled drivers build config 00:02:00.815 bus/vmbus: not in enabled drivers build config 00:02:00.815 common/cnxk: not in enabled drivers build config 00:02:00.815 common/mlx5: not in enabled drivers build config 00:02:00.815 common/nfp: not in enabled drivers build config 00:02:00.815 common/nitrox: not in enabled drivers build config 00:02:00.815 common/qat: not in enabled drivers build config 00:02:00.815 common/sfc_efx: not in enabled drivers build config 00:02:00.815 mempool/bucket: not in enabled drivers build config 00:02:00.815 mempool/cnxk: not in enabled drivers build config 00:02:00.815 mempool/dpaa: not in enabled drivers build config 00:02:00.815 mempool/dpaa2: not in enabled drivers build config 00:02:00.815 mempool/octeontx: not in enabled drivers build config 00:02:00.815 mempool/stack: not in enabled drivers build config 00:02:00.815 dma/cnxk: not in enabled drivers build config 00:02:00.815 dma/dpaa: not in enabled drivers build config 00:02:00.815 dma/dpaa2: not in enabled drivers build config 00:02:00.815 dma/hisilicon: not in enabled drivers build config 00:02:00.815 dma/idxd: not in enabled drivers build config 00:02:00.815 dma/ioat: not in enabled drivers build config 00:02:00.815 dma/skeleton: not in enabled drivers build config 00:02:00.815 net/af_packet: not in enabled drivers build config 00:02:00.815 net/af_xdp: not in enabled drivers build config 00:02:00.816 net/ark: not in enabled drivers build config 00:02:00.816 net/atlantic: not in enabled drivers build config 00:02:00.816 net/avp: not in enabled drivers build config 00:02:00.816 net/axgbe: not in enabled drivers build config 00:02:00.816 net/bnx2x: not in enabled drivers build config 00:02:00.816 net/bnxt: not in enabled drivers build config 00:02:00.816 net/bonding: not in enabled drivers build config 00:02:00.816 net/cnxk: not in enabled drivers build config 00:02:00.816 net/cpfl: not in enabled drivers build config 00:02:00.816 net/cxgbe: not in enabled drivers build config 00:02:00.816 net/dpaa: not in enabled drivers build config 00:02:00.816 net/dpaa2: not in enabled drivers build config 00:02:00.816 net/e1000: not in enabled drivers build config 00:02:00.816 net/ena: not in enabled drivers build config 00:02:00.816 net/enetc: not in enabled drivers build config 00:02:00.816 net/enetfec: not in enabled drivers build config 00:02:00.816 net/enic: not in enabled drivers build config 00:02:00.816 net/failsafe: not in enabled drivers build config 00:02:00.816 net/fm10k: not in enabled drivers build config 00:02:00.816 net/gve: not in enabled drivers build config 00:02:00.816 net/hinic: not in enabled drivers build config 00:02:00.816 net/hns3: not in enabled drivers build config 00:02:00.816 net/i40e: not in enabled drivers build config 00:02:00.816 net/iavf: not in enabled drivers build config 00:02:00.816 net/ice: not in enabled drivers build config 00:02:00.816 net/idpf: not in enabled drivers build config 00:02:00.816 net/igc: not in enabled drivers build config 00:02:00.816 net/ionic: not in enabled drivers build config 00:02:00.816 net/ipn3ke: not in enabled drivers build 
config 00:02:00.816 net/ixgbe: not in enabled drivers build config 00:02:00.816 net/mana: not in enabled drivers build config 00:02:00.816 net/memif: not in enabled drivers build config 00:02:00.816 net/mlx4: not in enabled drivers build config 00:02:00.816 net/mlx5: not in enabled drivers build config 00:02:00.816 net/mvneta: not in enabled drivers build config 00:02:00.816 net/mvpp2: not in enabled drivers build config 00:02:00.816 net/netvsc: not in enabled drivers build config 00:02:00.816 net/nfb: not in enabled drivers build config 00:02:00.816 net/nfp: not in enabled drivers build config 00:02:00.816 net/ngbe: not in enabled drivers build config 00:02:00.816 net/null: not in enabled drivers build config 00:02:00.816 net/octeontx: not in enabled drivers build config 00:02:00.816 net/octeon_ep: not in enabled drivers build config 00:02:00.816 net/pcap: not in enabled drivers build config 00:02:00.816 net/pfe: not in enabled drivers build config 00:02:00.816 net/qede: not in enabled drivers build config 00:02:00.816 net/ring: not in enabled drivers build config 00:02:00.816 net/sfc: not in enabled drivers build config 00:02:00.816 net/softnic: not in enabled drivers build config 00:02:00.816 net/tap: not in enabled drivers build config 00:02:00.816 net/thunderx: not in enabled drivers build config 00:02:00.816 net/txgbe: not in enabled drivers build config 00:02:00.816 net/vdev_netvsc: not in enabled drivers build config 00:02:00.816 net/vhost: not in enabled drivers build config 00:02:00.816 net/virtio: not in enabled drivers build config 00:02:00.816 net/vmxnet3: not in enabled drivers build config 00:02:00.816 raw/*: missing internal dependency, "rawdev" 00:02:00.816 crypto/armv8: not in enabled drivers build config 00:02:00.816 crypto/bcmfs: not in enabled drivers build config 00:02:00.816 crypto/caam_jr: not in enabled drivers build config 00:02:00.816 crypto/ccp: not in enabled drivers build config 00:02:00.816 crypto/cnxk: not in enabled drivers build config 00:02:00.816 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.816 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.816 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.816 crypto/mlx5: not in enabled drivers build config 00:02:00.816 crypto/mvsam: not in enabled drivers build config 00:02:00.816 crypto/nitrox: not in enabled drivers build config 00:02:00.816 crypto/null: not in enabled drivers build config 00:02:00.816 crypto/octeontx: not in enabled drivers build config 00:02:00.816 crypto/openssl: not in enabled drivers build config 00:02:00.816 crypto/scheduler: not in enabled drivers build config 00:02:00.816 crypto/uadk: not in enabled drivers build config 00:02:00.816 crypto/virtio: not in enabled drivers build config 00:02:00.816 compress/isal: not in enabled drivers build config 00:02:00.816 compress/mlx5: not in enabled drivers build config 00:02:00.816 compress/nitrox: not in enabled drivers build config 00:02:00.816 compress/octeontx: not in enabled drivers build config 00:02:00.816 compress/zlib: not in enabled drivers build config 00:02:00.816 regex/*: missing internal dependency, "regexdev" 00:02:00.816 ml/*: missing internal dependency, "mldev" 00:02:00.816 vdpa/ifc: not in enabled drivers build config 00:02:00.816 vdpa/mlx5: not in enabled drivers build config 00:02:00.816 vdpa/nfp: not in enabled drivers build config 00:02:00.816 vdpa/sfc: not in enabled drivers build config 00:02:00.816 event/*: missing internal dependency, "eventdev" 00:02:00.816 baseband/*: missing 
internal dependency, "bbdev" 00:02:00.816 gpu/*: missing internal dependency, "gpudev" 00:02:00.816 00:02:00.816 00:02:01.075 Build targets in project: 85 00:02:01.075 00:02:01.075 DPDK 24.03.0 00:02:01.075 00:02:01.075 User defined options 00:02:01.075 buildtype : debug 00:02:01.075 default_library : static 00:02:01.075 libdir : lib 00:02:01.075 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:01.075 b_sanitize : address 00:02:01.075 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:02:01.075 c_link_args : 00:02:01.075 cpu_instruction_set: native 00:02:01.075 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:02:01.075 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,argparse,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:02:01.075 enable_docs : false 00:02:01.075 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:01.075 enable_kmods : false 00:02:01.075 tests : false 00:02:01.075 00:02:01.075 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.642 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:01.642 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:01.642 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.642 [3/268] Linking static target lib/librte_kvargs.a 00:02:01.642 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:01.642 [5/268] Linking static target lib/librte_log.a 00:02:01.642 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.901 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.901 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.901 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.901 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.901 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.901 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.901 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:02.160 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.160 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:02.160 [16/268] Linking static target lib/librte_telemetry.a 00:02:02.160 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.160 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.418 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.418 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.418 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:02.418 [22/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.418 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:02.418 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:02.418 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:02.677 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:02.677 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.677 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.677 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.677 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.936 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.936 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.936 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.936 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.936 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.936 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.936 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.936 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.936 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.194 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.194 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.194 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.194 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.453 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.453 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.453 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.453 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.453 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.711 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.711 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.711 [51/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.711 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.711 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.711 [54/268] Linking target lib/librte_log.so.24.1 00:02:03.711 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.711 [56/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.711 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.711 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.711 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.969 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.969 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.969 [62/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:03.969 [63/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.969 [64/268] Linking target lib/librte_kvargs.so.24.1 00:02:04.228 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.228 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.228 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.228 [68/268] Linking target lib/librte_telemetry.so.24.1 00:02:04.228 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.228 [70/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.228 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.228 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.486 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.486 [74/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:04.486 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.486 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.486 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.486 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:04.486 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:04.486 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.487 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:04.487 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.744 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.744 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.744 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.744 [86/268] Linking static target lib/librte_eal.a 00:02:04.744 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:04.744 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:04.744 [89/268] Linking static target lib/librte_ring.a 00:02:04.744 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:04.744 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:05.011 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.011 [93/268] Linking static target lib/librte_mempool.a 00:02:05.011 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.011 [95/268] Linking static target lib/librte_rcu.a 00:02:05.011 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:05.011 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:05.011 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:05.011 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:05.270 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:05.270 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.270 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:05.270 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.270 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:05.270 [105/268] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:05.270 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:05.528 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:05.528 [108/268] Linking static target lib/librte_meter.a 00:02:05.528 [109/268] Linking static target lib/librte_net.a 00:02:05.528 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.528 [111/268] Linking static target lib/librte_mbuf.a 00:02:05.528 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:05.528 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.528 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:05.528 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:05.787 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.787 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.787 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.787 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:06.045 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.045 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:06.045 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.305 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.305 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:06.305 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.305 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.305 [127/268] Linking static target lib/librte_pci.a 00:02:06.305 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.305 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:06.305 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.305 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.305 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.564 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.564 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.564 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:06.564 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.564 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.564 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.564 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.564 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.564 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.564 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:06.564 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:06.564 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:06.564 [145/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.564 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:06.564 [147/268] Linking static target lib/librte_cmdline.a 00:02:06.830 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:06.830 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:06.830 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:06.830 [151/268] Linking static target lib/librte_timer.a 00:02:06.830 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.092 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.092 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:07.092 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.092 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.351 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.351 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.351 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.351 [160/268] Linking static target lib/librte_compressdev.a 00:02:07.351 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:07.351 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.351 [163/268] Linking static target lib/librte_hash.a 00:02:07.351 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.610 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.610 [166/268] Linking static target lib/librte_dmadev.a 00:02:07.610 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:07.610 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:07.610 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:07.610 [170/268] Linking static target lib/librte_ethdev.a 00:02:07.610 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:07.610 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:07.610 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.880 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:07.880 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.880 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:08.151 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:08.151 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.151 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:08.151 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:08.151 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.151 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.151 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.151 [184/268] Linking static target lib/librte_cryptodev.a 00:02:08.151 [185/268] Compiling 
C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:08.151 [186/268] Linking static target lib/librte_power.a 00:02:08.411 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:08.411 [188/268] Linking static target lib/librte_reorder.a 00:02:08.411 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:08.411 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:08.411 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:08.670 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:08.670 [193/268] Linking static target lib/librte_security.a 00:02:08.670 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.928 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:08.928 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.186 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:09.186 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.186 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:09.186 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:09.186 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:09.444 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:09.444 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:09.444 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:09.444 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:09.444 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:09.703 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:09.703 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:09.703 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:09.703 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:09.703 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.703 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:09.961 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.961 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.961 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:09.961 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:09.961 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.961 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.961 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:09.961 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:09.961 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:09.961 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.218 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:10.218 
[224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.218 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.218 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:10.476 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.410 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:14.697 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.697 [230/268] Linking target lib/librte_eal.so.24.1 00:02:14.697 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:14.697 [232/268] Linking target lib/librte_pci.so.24.1 00:02:14.697 [233/268] Linking target lib/librte_ring.so.24.1 00:02:14.697 [234/268] Linking target lib/librte_meter.so.24.1 00:02:14.697 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:14.697 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:14.697 [237/268] Linking target lib/librte_timer.so.24.1 00:02:14.697 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:14.697 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:14.697 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:14.697 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:14.697 [242/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.697 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:14.697 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:14.697 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:14.697 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:14.972 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:14.972 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:14.972 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:14.972 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:14.972 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:15.229 [252/268] Linking target lib/librte_net.so.24.1 00:02:15.229 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:15.229 [254/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.229 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:15.229 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:15.229 [257/268] Linking static target lib/librte_vhost.a 00:02:15.229 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:15.229 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:15.487 [260/268] Linking target lib/librte_security.so.24.1 00:02:15.487 [261/268] Linking target lib/librte_hash.so.24.1 00:02:15.487 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:15.487 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:15.487 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:15.487 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:15.745 [266/268] Linking target 
lib/librte_power.so.24.1 00:02:17.645 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.645 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:17.645 INFO: autodetecting backend as ninja 00:02:17.645 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:18.580 CC lib/log/log.o 00:02:18.580 CC lib/log/log_deprecated.o 00:02:18.580 CC lib/log/log_flags.o 00:02:18.580 CC lib/ut_mock/mock.o 00:02:18.580 CC lib/ut/ut.o 00:02:18.840 LIB libspdk_ut.a 00:02:18.840 LIB libspdk_log.a 00:02:18.840 LIB libspdk_ut_mock.a 00:02:19.099 CC lib/ioat/ioat.o 00:02:19.099 CC lib/util/base64.o 00:02:19.099 CC lib/util/cpuset.o 00:02:19.099 CC lib/util/bit_array.o 00:02:19.099 CC lib/util/crc16.o 00:02:19.099 CC lib/util/crc32.o 00:02:19.099 CC lib/util/crc32c.o 00:02:19.099 CC lib/dma/dma.o 00:02:19.099 CXX lib/trace_parser/trace.o 00:02:19.099 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.099 CC lib/vfio_user/host/vfio_user.o 00:02:19.099 CC lib/util/crc32_ieee.o 00:02:19.357 LIB libspdk_dma.a 00:02:19.357 CC lib/util/crc64.o 00:02:19.357 CC lib/util/dif.o 00:02:19.357 CC lib/util/fd.o 00:02:19.357 CC lib/util/file.o 00:02:19.357 CC lib/util/hexlify.o 00:02:19.357 CC lib/util/iov.o 00:02:19.357 CC lib/util/math.o 00:02:19.357 LIB libspdk_ioat.a 00:02:19.357 CC lib/util/pipe.o 00:02:19.357 CC lib/util/strerror_tls.o 00:02:19.357 CC lib/util/string.o 00:02:19.357 LIB libspdk_vfio_user.a 00:02:19.358 CC lib/util/uuid.o 00:02:19.616 CC lib/util/fd_group.o 00:02:19.616 CC lib/util/xor.o 00:02:19.616 CC lib/util/zipf.o 00:02:20.184 LIB libspdk_util.a 00:02:20.184 CC lib/vmd/vmd.o 00:02:20.184 CC lib/vmd/led.o 00:02:20.184 CC lib/conf/conf.o 00:02:20.184 LIB libspdk_trace_parser.a 00:02:20.184 CC lib/env_dpdk/env.o 00:02:20.184 CC lib/env_dpdk/pci.o 00:02:20.184 CC lib/env_dpdk/memory.o 00:02:20.184 CC lib/rdma/common.o 00:02:20.444 CC lib/idxd/idxd.o 00:02:20.444 CC lib/json/json_parse.o 00:02:20.444 CC lib/idxd/idxd_user.o 00:02:20.444 CC lib/json/json_util.o 00:02:20.703 CC lib/json/json_write.o 00:02:20.703 CC lib/rdma/rdma_verbs.o 00:02:20.703 LIB libspdk_conf.a 00:02:20.703 CC lib/env_dpdk/init.o 00:02:20.703 CC lib/env_dpdk/threads.o 00:02:20.703 CC lib/env_dpdk/pci_ioat.o 00:02:20.703 CC lib/env_dpdk/pci_virtio.o 00:02:20.703 CC lib/env_dpdk/pci_vmd.o 00:02:20.963 CC lib/env_dpdk/pci_idxd.o 00:02:20.963 CC lib/env_dpdk/pci_event.o 00:02:20.963 LIB libspdk_rdma.a 00:02:20.963 LIB libspdk_json.a 00:02:20.963 CC lib/env_dpdk/sigbus_handler.o 00:02:20.963 CC lib/env_dpdk/pci_dpdk.o 00:02:20.963 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:20.963 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:20.963 LIB libspdk_idxd.a 00:02:20.963 LIB libspdk_vmd.a 00:02:21.222 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.222 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.222 CC lib/jsonrpc/jsonrpc_client.o 00:02:21.222 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.480 LIB libspdk_jsonrpc.a 00:02:21.738 CC lib/rpc/rpc.o 00:02:21.998 LIB libspdk_rpc.a 00:02:21.998 LIB libspdk_env_dpdk.a 00:02:22.257 CC lib/notify/notify.o 00:02:22.257 CC lib/notify/notify_rpc.o 00:02:22.257 CC lib/trace/trace_rpc.o 00:02:22.257 CC lib/trace/trace.o 00:02:22.257 CC lib/trace/trace_flags.o 00:02:22.257 CC lib/keyring/keyring.o 00:02:22.257 CC lib/keyring/keyring_rpc.o 00:02:22.515 LIB libspdk_notify.a 00:02:22.515 LIB libspdk_keyring.a 00:02:22.515 LIB libspdk_trace.a 00:02:22.775 CC lib/sock/sock.o 00:02:22.775 CC lib/sock/sock_rpc.o 
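For anyone reproducing the DPDK submodule build outside this pipeline, the "User defined options" summary recorded earlier and the ninja backend command printed above map roughly onto the meson/ninja invocation sketched below. This is a hedged reconstruction, not the literal command line: in this run the configuration is driven by SPDK's dpdkbuild wrapper, and only the resulting option summary and the `ninja -C .../build-tmp -j 10` call are visible in the log. Option values are copied verbatim from that summary; only the `meson setup` CLI spelling is assumed.

  cd /home/vagrant/spdk_repo/spdk/dpdk
  # Values below are copied from the "User defined options" block in the log above.
  meson setup build-tmp \
      --buildtype=debug \
      --default-library=static \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Dtests=false \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_apps=test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf \
      -Ddisable_libs=node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,argparse,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev
  # Matches the backend command meson prints in the log.
  ninja -C build-tmp -j 10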
00:02:22.775 CC lib/thread/thread.o 00:02:22.775 CC lib/thread/iobuf.o 00:02:23.343 LIB libspdk_sock.a 00:02:23.601 CC lib/nvme/nvme_fabric.o 00:02:23.601 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:23.601 CC lib/nvme/nvme_ctrlr.o 00:02:23.601 CC lib/nvme/nvme_pcie.o 00:02:23.601 CC lib/nvme/nvme_pcie_common.o 00:02:23.601 CC lib/nvme/nvme_ns_cmd.o 00:02:23.601 CC lib/nvme/nvme.o 00:02:23.601 CC lib/nvme/nvme_ns.o 00:02:23.601 CC lib/nvme/nvme_qpair.o 00:02:24.168 CC lib/nvme/nvme_quirks.o 00:02:24.169 CC lib/nvme/nvme_transport.o 00:02:24.169 CC lib/nvme/nvme_discovery.o 00:02:24.169 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:24.428 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:24.428 CC lib/nvme/nvme_tcp.o 00:02:24.428 CC lib/nvme/nvme_opal.o 00:02:24.428 CC lib/nvme/nvme_io_msg.o 00:02:24.428 LIB libspdk_thread.a 00:02:24.428 CC lib/nvme/nvme_poll_group.o 00:02:24.686 CC lib/nvme/nvme_zns.o 00:02:24.686 CC lib/nvme/nvme_stubs.o 00:02:24.686 CC lib/nvme/nvme_auth.o 00:02:24.686 CC lib/nvme/nvme_cuse.o 00:02:24.945 CC lib/accel/accel.o 00:02:24.945 CC lib/nvme/nvme_rdma.o 00:02:24.945 CC lib/accel/accel_rpc.o 00:02:25.204 CC lib/init/json_config.o 00:02:25.204 CC lib/virtio/virtio.o 00:02:25.204 CC lib/blob/blobstore.o 00:02:25.204 CC lib/virtio/virtio_vhost_user.o 00:02:25.464 CC lib/init/subsystem.o 00:02:25.464 CC lib/virtio/virtio_vfio_user.o 00:02:25.464 CC lib/blob/request.o 00:02:25.723 CC lib/blob/zeroes.o 00:02:25.723 CC lib/blob/blob_bs_dev.o 00:02:25.723 CC lib/init/subsystem_rpc.o 00:02:25.723 CC lib/init/rpc.o 00:02:25.723 CC lib/virtio/virtio_pci.o 00:02:25.723 CC lib/accel/accel_sw.o 00:02:25.982 LIB libspdk_init.a 00:02:26.241 LIB libspdk_accel.a 00:02:26.241 LIB libspdk_virtio.a 00:02:26.241 CC lib/event/app.o 00:02:26.241 CC lib/event/app_rpc.o 00:02:26.241 CC lib/event/log_rpc.o 00:02:26.241 CC lib/event/reactor.o 00:02:26.241 CC lib/event/scheduler_static.o 00:02:26.500 LIB libspdk_nvme.a 00:02:26.500 CC lib/bdev/bdev.o 00:02:26.500 CC lib/bdev/bdev_rpc.o 00:02:26.500 CC lib/bdev/bdev_zone.o 00:02:26.500 CC lib/bdev/part.o 00:02:26.500 CC lib/bdev/scsi_nvme.o 00:02:26.759 LIB libspdk_event.a 00:02:29.293 LIB libspdk_blob.a 00:02:29.293 CC lib/blobfs/tree.o 00:02:29.293 CC lib/blobfs/blobfs.o 00:02:29.293 CC lib/lvol/lvol.o 00:02:29.293 LIB libspdk_bdev.a 00:02:29.552 CC lib/scsi/dev.o 00:02:29.552 CC lib/scsi/lun.o 00:02:29.552 CC lib/scsi/port.o 00:02:29.552 CC lib/scsi/scsi.o 00:02:29.552 CC lib/scsi/scsi_bdev.o 00:02:29.552 CC lib/ftl/ftl_core.o 00:02:29.552 CC lib/nbd/nbd.o 00:02:29.552 CC lib/nvmf/ctrlr.o 00:02:29.810 CC lib/nvmf/ctrlr_discovery.o 00:02:29.810 CC lib/nvmf/ctrlr_bdev.o 00:02:29.810 CC lib/nvmf/subsystem.o 00:02:30.069 CC lib/nvmf/nvmf.o 00:02:30.069 CC lib/nbd/nbd_rpc.o 00:02:30.069 CC lib/ftl/ftl_init.o 00:02:30.069 LIB libspdk_blobfs.a 00:02:30.069 CC lib/scsi/scsi_pr.o 00:02:30.327 CC lib/scsi/scsi_rpc.o 00:02:30.327 LIB libspdk_nbd.a 00:02:30.327 LIB libspdk_lvol.a 00:02:30.327 CC lib/scsi/task.o 00:02:30.327 CC lib/ftl/ftl_layout.o 00:02:30.327 CC lib/ftl/ftl_debug.o 00:02:30.327 CC lib/ftl/ftl_io.o 00:02:30.585 CC lib/ftl/ftl_sb.o 00:02:30.585 LIB libspdk_scsi.a 00:02:30.585 CC lib/nvmf/nvmf_rpc.o 00:02:30.585 CC lib/ftl/ftl_l2p.o 00:02:30.585 CC lib/nvmf/transport.o 00:02:30.585 CC lib/nvmf/tcp.o 00:02:30.844 CC lib/nvmf/stubs.o 00:02:30.844 CC lib/ftl/ftl_l2p_flat.o 00:02:30.844 CC lib/nvmf/mdns_server.o 00:02:30.844 CC lib/nvmf/rdma.o 00:02:30.844 CC lib/ftl/ftl_nv_cache.o 00:02:31.103 CC lib/ftl/ftl_band.o 00:02:31.103 CC lib/nvmf/auth.o 00:02:31.362 CC 
lib/ftl/ftl_band_ops.o 00:02:31.362 CC lib/iscsi/conn.o 00:02:31.362 CC lib/ftl/ftl_writer.o 00:02:31.362 CC lib/vhost/vhost.o 00:02:31.621 CC lib/vhost/vhost_rpc.o 00:02:31.621 CC lib/vhost/vhost_scsi.o 00:02:31.880 CC lib/ftl/ftl_rq.o 00:02:31.880 CC lib/ftl/ftl_reloc.o 00:02:31.880 CC lib/ftl/ftl_l2p_cache.o 00:02:32.138 CC lib/ftl/ftl_p2l.o 00:02:32.138 CC lib/iscsi/init_grp.o 00:02:32.138 CC lib/vhost/vhost_blk.o 00:02:32.138 CC lib/ftl/mngt/ftl_mngt.o 00:02:32.138 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.138 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.397 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.397 CC lib/iscsi/iscsi.o 00:02:32.397 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.397 CC lib/iscsi/md5.o 00:02:32.397 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.397 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.397 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.397 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.656 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.656 CC lib/iscsi/param.o 00:02:32.656 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.656 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:32.656 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:32.656 CC lib/ftl/utils/ftl_conf.o 00:02:32.915 CC lib/iscsi/portal_grp.o 00:02:32.915 CC lib/ftl/utils/ftl_md.o 00:02:32.915 CC lib/ftl/utils/ftl_mempool.o 00:02:32.915 CC lib/iscsi/tgt_node.o 00:02:32.915 CC lib/iscsi/iscsi_subsystem.o 00:02:32.915 CC lib/vhost/rte_vhost_user.o 00:02:32.915 CC lib/iscsi/iscsi_rpc.o 00:02:33.174 CC lib/iscsi/task.o 00:02:33.174 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.174 CC lib/ftl/utils/ftl_property.o 00:02:33.174 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.433 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:33.433 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.433 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:33.433 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:33.433 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:33.433 LIB libspdk_nvmf.a 00:02:33.433 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:33.433 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:33.433 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:33.692 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:33.692 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:33.692 CC lib/ftl/base/ftl_base_dev.o 00:02:33.692 CC lib/ftl/base/ftl_base_bdev.o 00:02:33.692 CC lib/ftl/ftl_trace.o 00:02:33.951 LIB libspdk_iscsi.a 00:02:33.951 LIB libspdk_ftl.a 00:02:33.951 LIB libspdk_vhost.a 00:02:34.520 CC module/env_dpdk/env_dpdk_rpc.o 00:02:34.520 CC module/blob/bdev/blob_bdev.o 00:02:34.520 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:34.520 CC module/accel/dsa/accel_dsa.o 00:02:34.520 CC module/accel/error/accel_error.o 00:02:34.520 CC module/keyring/file/keyring.o 00:02:34.520 CC module/accel/ioat/accel_ioat.o 00:02:34.520 CC module/scheduler/gscheduler/gscheduler.o 00:02:34.520 CC module/sock/posix/posix.o 00:02:34.520 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:34.520 LIB libspdk_env_dpdk_rpc.a 00:02:34.779 CC module/accel/error/accel_error_rpc.o 00:02:34.779 CC module/keyring/file/keyring_rpc.o 00:02:34.779 LIB libspdk_scheduler_dpdk_governor.a 00:02:34.779 LIB libspdk_scheduler_gscheduler.a 00:02:34.779 CC module/accel/dsa/accel_dsa_rpc.o 00:02:34.779 CC module/accel/ioat/accel_ioat_rpc.o 00:02:34.779 LIB libspdk_accel_error.a 00:02:34.779 LIB libspdk_scheduler_dynamic.a 00:02:34.779 LIB libspdk_blob_bdev.a 00:02:34.779 LIB libspdk_keyring_file.a 00:02:35.037 LIB libspdk_accel_dsa.a 00:02:35.037 LIB libspdk_accel_ioat.a 00:02:35.037 CC module/accel/iaa/accel_iaa.o 00:02:35.037 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.037 CC 
module/keyring/linux/keyring.o 00:02:35.037 CC module/keyring/linux/keyring_rpc.o 00:02:35.037 CC module/blobfs/bdev/blobfs_bdev.o 00:02:35.037 CC module/bdev/gpt/gpt.o 00:02:35.037 CC module/bdev/lvol/vbdev_lvol.o 00:02:35.037 LIB libspdk_keyring_linux.a 00:02:35.037 CC module/bdev/delay/vbdev_delay.o 00:02:35.296 CC module/bdev/error/vbdev_error.o 00:02:35.296 LIB libspdk_accel_iaa.a 00:02:35.296 CC module/bdev/error/vbdev_error_rpc.o 00:02:35.296 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:35.296 CC module/bdev/malloc/bdev_malloc.o 00:02:35.296 CC module/bdev/null/bdev_null.o 00:02:35.296 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:35.296 CC module/bdev/gpt/vbdev_gpt.o 00:02:35.296 LIB libspdk_sock_posix.a 00:02:35.554 LIB libspdk_bdev_error.a 00:02:35.554 LIB libspdk_blobfs_bdev.a 00:02:35.554 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:35.555 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:35.555 CC module/bdev/nvme/bdev_nvme.o 00:02:35.555 CC module/bdev/null/bdev_null_rpc.o 00:02:35.555 CC module/bdev/passthru/vbdev_passthru.o 00:02:35.555 LIB libspdk_bdev_gpt.a 00:02:35.555 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:35.813 LIB libspdk_bdev_malloc.a 00:02:35.813 LIB libspdk_bdev_delay.a 00:02:35.813 CC module/bdev/raid/bdev_raid.o 00:02:35.813 LIB libspdk_bdev_null.a 00:02:35.813 CC module/bdev/raid/bdev_raid_rpc.o 00:02:35.813 LIB libspdk_bdev_lvol.a 00:02:35.813 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:35.813 CC module/bdev/raid/bdev_raid_sb.o 00:02:35.813 CC module/bdev/split/vbdev_split.o 00:02:35.813 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:35.813 CC module/bdev/raid/raid0.o 00:02:35.813 LIB libspdk_bdev_passthru.a 00:02:36.071 CC module/bdev/aio/bdev_aio.o 00:02:36.071 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.071 CC module/bdev/nvme/nvme_rpc.o 00:02:36.071 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.071 CC module/bdev/raid/raid1.o 00:02:36.071 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.071 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.071 CC module/bdev/nvme/vbdev_opal.o 00:02:36.330 LIB libspdk_bdev_split.a 00:02:36.330 CC module/bdev/raid/concat.o 00:02:36.330 LIB libspdk_bdev_zone_block.a 00:02:36.330 LIB libspdk_bdev_aio.a 00:02:36.330 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:36.330 CC module/bdev/raid/raid5f.o 00:02:36.330 CC module/bdev/ftl/bdev_ftl.o 00:02:36.330 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:36.589 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.589 CC module/bdev/iscsi/bdev_iscsi.o 00:02:36.589 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:36.589 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:36.589 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:36.589 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:36.849 LIB libspdk_bdev_ftl.a 00:02:36.849 LIB libspdk_bdev_iscsi.a 00:02:36.849 LIB libspdk_bdev_raid.a 00:02:37.109 LIB libspdk_bdev_virtio.a 00:02:38.052 LIB libspdk_bdev_nvme.a 00:02:38.619 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:38.619 CC module/event/subsystems/scheduler/scheduler.o 00:02:38.619 CC module/event/subsystems/sock/sock.o 00:02:38.619 CC module/event/subsystems/vmd/vmd.o 00:02:38.619 CC module/event/subsystems/keyring/keyring.o 00:02:38.619 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:38.619 CC module/event/subsystems/iobuf/iobuf.o 00:02:38.619 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:38.619 LIB libspdk_event_keyring.a 00:02:38.619 LIB libspdk_event_sock.a 00:02:38.619 LIB libspdk_event_vhost_blk.a 00:02:38.877 LIB libspdk_event_scheduler.a 00:02:38.877 LIB libspdk_event_vmd.a 
00:02:38.877 LIB libspdk_event_iobuf.a 00:02:39.135 CC module/event/subsystems/accel/accel.o 00:02:39.135 LIB libspdk_event_accel.a 00:02:39.701 CC module/event/subsystems/bdev/bdev.o 00:02:39.701 LIB libspdk_event_bdev.a 00:02:39.959 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:39.959 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:39.959 CC module/event/subsystems/scsi/scsi.o 00:02:39.959 CC module/event/subsystems/nbd/nbd.o 00:02:40.220 LIB libspdk_event_scsi.a 00:02:40.220 LIB libspdk_event_nbd.a 00:02:40.220 LIB libspdk_event_nvmf.a 00:02:40.482 CC module/event/subsystems/iscsi/iscsi.o 00:02:40.482 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:40.740 LIB libspdk_event_vhost_scsi.a 00:02:40.740 LIB libspdk_event_iscsi.a 00:02:40.999 TEST_HEADER include/spdk/accel.h 00:02:40.999 TEST_HEADER include/spdk/accel_module.h 00:02:40.999 CXX app/trace/trace.o 00:02:40.999 TEST_HEADER include/spdk/assert.h 00:02:40.999 TEST_HEADER include/spdk/barrier.h 00:02:40.999 TEST_HEADER include/spdk/base64.h 00:02:40.999 TEST_HEADER include/spdk/bdev.h 00:02:40.999 TEST_HEADER include/spdk/bdev_module.h 00:02:40.999 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.999 TEST_HEADER include/spdk/bit_array.h 00:02:40.999 TEST_HEADER include/spdk/bit_pool.h 00:02:40.999 TEST_HEADER include/spdk/blob.h 00:02:40.999 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.999 TEST_HEADER include/spdk/blobfs.h 00:02:40.999 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.999 TEST_HEADER include/spdk/conf.h 00:02:40.999 TEST_HEADER include/spdk/config.h 00:02:40.999 TEST_HEADER include/spdk/cpuset.h 00:02:40.999 TEST_HEADER include/spdk/crc16.h 00:02:40.999 TEST_HEADER include/spdk/crc32.h 00:02:40.999 TEST_HEADER include/spdk/crc64.h 00:02:40.999 TEST_HEADER include/spdk/dif.h 00:02:40.999 TEST_HEADER include/spdk/dma.h 00:02:40.999 TEST_HEADER include/spdk/endian.h 00:02:40.999 TEST_HEADER include/spdk/env.h 00:02:40.999 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.999 TEST_HEADER include/spdk/event.h 00:02:40.999 CC examples/accel/perf/accel_perf.o 00:02:40.999 TEST_HEADER include/spdk/fd.h 00:02:40.999 TEST_HEADER include/spdk/fd_group.h 00:02:40.999 TEST_HEADER include/spdk/file.h 00:02:40.999 TEST_HEADER include/spdk/ftl.h 00:02:40.999 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.999 TEST_HEADER include/spdk/hexlify.h 00:02:40.999 TEST_HEADER include/spdk/histogram_data.h 00:02:40.999 TEST_HEADER include/spdk/idxd.h 00:02:40.999 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.999 TEST_HEADER include/spdk/init.h 00:02:40.999 TEST_HEADER include/spdk/ioat.h 00:02:40.999 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.999 CC test/app/bdev_svc/bdev_svc.o 00:02:40.999 TEST_HEADER include/spdk/iscsi_spec.h 00:02:40.999 TEST_HEADER include/spdk/json.h 00:02:40.999 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.999 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.999 TEST_HEADER include/spdk/keyring.h 00:02:40.999 CC test/bdev/bdevio/bdevio.o 00:02:40.999 TEST_HEADER include/spdk/keyring_module.h 00:02:40.999 CC test/accel/dif/dif.o 00:02:40.999 CC test/dma/test_dma/test_dma.o 00:02:40.999 TEST_HEADER include/spdk/likely.h 00:02:40.999 TEST_HEADER include/spdk/log.h 00:02:40.999 TEST_HEADER include/spdk/lvol.h 00:02:40.999 TEST_HEADER include/spdk/memory.h 00:02:40.999 TEST_HEADER include/spdk/mmio.h 00:02:40.999 TEST_HEADER include/spdk/nbd.h 00:02:40.999 TEST_HEADER include/spdk/notify.h 00:02:40.999 TEST_HEADER include/spdk/nvme.h 00:02:40.999 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.999 TEST_HEADER 
include/spdk/nvme_ocssd.h 00:02:40.999 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.999 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.999 CC test/blobfs/mkfs/mkfs.o 00:02:40.999 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.999 TEST_HEADER include/spdk/nvmf.h 00:02:40.999 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.999 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.999 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.999 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.999 TEST_HEADER include/spdk/opal.h 00:02:40.999 TEST_HEADER include/spdk/opal_spec.h 00:02:40.999 TEST_HEADER include/spdk/pci_ids.h 00:02:41.327 TEST_HEADER include/spdk/pipe.h 00:02:41.327 TEST_HEADER include/spdk/queue.h 00:02:41.327 TEST_HEADER include/spdk/reduce.h 00:02:41.327 TEST_HEADER include/spdk/rpc.h 00:02:41.327 CC test/env/mem_callbacks/mem_callbacks.o 00:02:41.327 TEST_HEADER include/spdk/scheduler.h 00:02:41.327 TEST_HEADER include/spdk/scsi.h 00:02:41.327 TEST_HEADER include/spdk/scsi_spec.h 00:02:41.327 TEST_HEADER include/spdk/sock.h 00:02:41.327 TEST_HEADER include/spdk/stdinc.h 00:02:41.327 TEST_HEADER include/spdk/string.h 00:02:41.327 TEST_HEADER include/spdk/thread.h 00:02:41.327 TEST_HEADER include/spdk/trace.h 00:02:41.327 TEST_HEADER include/spdk/trace_parser.h 00:02:41.327 TEST_HEADER include/spdk/tree.h 00:02:41.327 TEST_HEADER include/spdk/ublk.h 00:02:41.327 TEST_HEADER include/spdk/util.h 00:02:41.327 TEST_HEADER include/spdk/uuid.h 00:02:41.327 TEST_HEADER include/spdk/version.h 00:02:41.328 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:41.328 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:41.328 TEST_HEADER include/spdk/vhost.h 00:02:41.328 TEST_HEADER include/spdk/vmd.h 00:02:41.328 TEST_HEADER include/spdk/xor.h 00:02:41.328 TEST_HEADER include/spdk/zipf.h 00:02:41.328 CXX test/cpp_headers/accel.o 00:02:41.328 LINK bdev_svc 00:02:41.328 LINK mkfs 00:02:41.328 LINK hello_bdev 00:02:41.602 LINK accel_perf 00:02:41.602 CXX test/cpp_headers/accel_module.o 00:02:41.602 LINK spdk_trace 00:02:41.602 LINK dif 00:02:41.602 LINK test_dma 00:02:41.860 LINK bdevio 00:02:41.860 LINK mem_callbacks 00:02:41.860 CXX test/cpp_headers/assert.o 00:02:42.119 CXX test/cpp_headers/barrier.o 00:02:42.377 CC app/trace_record/trace_record.o 00:02:42.377 CXX test/cpp_headers/base64.o 00:02:42.377 CC test/env/vtophys/vtophys.o 00:02:42.636 CXX test/cpp_headers/bdev.o 00:02:42.636 LINK spdk_trace_record 00:02:42.636 LINK vtophys 00:02:42.894 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:42.894 CXX test/cpp_headers/bdev_module.o 00:02:43.153 CXX test/cpp_headers/bdev_zone.o 00:02:43.411 CXX test/cpp_headers/bit_array.o 00:02:43.670 CXX test/cpp_headers/bit_pool.o 00:02:43.670 LINK nvme_fuzz 00:02:43.670 CC app/nvmf_tgt/nvmf_main.o 00:02:43.928 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:43.928 CXX test/cpp_headers/blob.o 00:02:43.928 LINK nvmf_tgt 00:02:43.928 LINK env_dpdk_post_init 00:02:44.187 CXX test/cpp_headers/blob_bdev.o 00:02:44.752 CXX test/cpp_headers/blobfs.o 00:02:44.752 CXX test/cpp_headers/blobfs_bdev.o 00:02:44.752 CC examples/bdev/bdevperf/bdevperf.o 00:02:45.011 CXX test/cpp_headers/conf.o 00:02:45.011 CXX test/cpp_headers/config.o 00:02:45.011 CXX test/cpp_headers/cpuset.o 00:02:45.269 CC test/env/memory/memory_ut.o 00:02:45.269 CXX test/cpp_headers/crc16.o 00:02:45.527 CXX test/cpp_headers/crc32.o 00:02:45.527 LINK bdevperf 00:02:45.527 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:45.786 CC app/iscsi_tgt/iscsi_tgt.o 00:02:45.786 CC test/app/histogram_perf/histogram_perf.o 
00:02:45.786 CC test/app/jsoncat/jsoncat.o 00:02:45.786 CXX test/cpp_headers/crc64.o 00:02:45.786 CC test/env/pci/pci_ut.o 00:02:46.044 LINK histogram_perf 00:02:46.044 LINK jsoncat 00:02:46.044 LINK iscsi_tgt 00:02:46.044 CXX test/cpp_headers/dif.o 00:02:46.044 CXX test/cpp_headers/dma.o 00:02:46.303 CXX test/cpp_headers/endian.o 00:02:46.561 CC app/spdk_tgt/spdk_tgt.o 00:02:46.561 LINK pci_ut 00:02:46.561 CXX test/cpp_headers/env.o 00:02:46.561 LINK memory_ut 00:02:46.561 CXX test/cpp_headers/env_dpdk.o 00:02:46.561 LINK spdk_tgt 00:02:46.820 CXX test/cpp_headers/event.o 00:02:46.820 CXX test/cpp_headers/fd.o 00:02:46.820 CC test/app/stub/stub.o 00:02:46.820 CXX test/cpp_headers/fd_group.o 00:02:47.079 CXX test/cpp_headers/file.o 00:02:47.079 CC test/event/event_perf/event_perf.o 00:02:47.079 LINK stub 00:02:47.079 CXX test/cpp_headers/ftl.o 00:02:47.079 CC test/lvol/esnap/esnap.o 00:02:47.079 LINK event_perf 00:02:47.337 CC app/spdk_lspci/spdk_lspci.o 00:02:47.337 LINK spdk_lspci 00:02:47.595 CXX test/cpp_headers/gpt_spec.o 00:02:47.595 LINK iscsi_fuzz 00:02:47.853 CXX test/cpp_headers/hexlify.o 00:02:48.111 CXX test/cpp_headers/histogram_data.o 00:02:48.369 CXX test/cpp_headers/idxd.o 00:02:48.626 CXX test/cpp_headers/idxd_spec.o 00:02:48.626 CC test/event/reactor/reactor.o 00:02:48.883 CC test/rpc_client/rpc_client_test.o 00:02:48.883 CXX test/cpp_headers/init.o 00:02:48.883 CC test/nvme/aer/aer.o 00:02:48.883 LINK reactor 00:02:49.140 LINK rpc_client_test 00:02:49.140 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:49.140 CXX test/cpp_headers/ioat.o 00:02:49.140 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:49.397 CXX test/cpp_headers/ioat_spec.o 00:02:49.397 LINK aer 00:02:49.654 CC test/thread/poller_perf/poller_perf.o 00:02:49.654 CC examples/blob/hello_world/hello_blob.o 00:02:49.654 CXX test/cpp_headers/iscsi_spec.o 00:02:49.654 LINK vhost_fuzz 00:02:49.911 LINK poller_perf 00:02:49.911 CC test/event/reactor_perf/reactor_perf.o 00:02:49.911 LINK hello_blob 00:02:49.911 CXX test/cpp_headers/json.o 00:02:50.168 CXX test/cpp_headers/jsonrpc.o 00:02:50.168 LINK reactor_perf 00:02:50.168 CC examples/ioat/perf/perf.o 00:02:50.426 CXX test/cpp_headers/keyring.o 00:02:50.426 CXX test/cpp_headers/keyring_module.o 00:02:50.684 LINK ioat_perf 00:02:50.941 CXX test/cpp_headers/likely.o 00:02:50.941 CXX test/cpp_headers/log.o 00:02:51.200 CC test/thread/lock/spdk_lock.o 00:02:51.200 CC app/spdk_nvme_perf/perf.o 00:02:51.200 CXX test/cpp_headers/lvol.o 00:02:51.458 CXX test/cpp_headers/memory.o 00:02:51.458 CC test/nvme/reset/reset.o 00:02:51.716 CXX test/cpp_headers/mmio.o 00:02:51.716 CC app/spdk_nvme_identify/identify.o 00:02:51.716 CC test/event/app_repeat/app_repeat.o 00:02:51.716 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:51.716 CC examples/ioat/verify/verify.o 00:02:51.973 CXX test/cpp_headers/nbd.o 00:02:51.973 LINK reset 00:02:51.973 CXX test/cpp_headers/notify.o 00:02:51.973 LINK app_repeat 00:02:51.973 LINK histogram_ut 00:02:51.973 LINK verify 00:02:52.231 CXX test/cpp_headers/nvme.o 00:02:52.488 LINK spdk_nvme_perf 00:02:52.488 CXX test/cpp_headers/nvme_intel.o 00:02:52.488 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:52.746 CXX test/cpp_headers/nvme_ocssd.o 00:02:53.004 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:53.261 LINK spdk_nvme_identify 00:02:53.261 CXX test/cpp_headers/nvme_spec.o 00:02:53.261 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:02:53.519 CXX test/cpp_headers/nvme_zns.o 00:02:53.778 CXX test/cpp_headers/nvmf.o 00:02:54.074 CXX 
test/cpp_headers/nvmf_cmd.o 00:02:54.074 CC test/nvme/sgl/sgl.o 00:02:54.074 LINK spdk_lock 00:02:54.074 CC examples/nvme/hello_world/hello_world.o 00:02:54.332 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:54.332 CC test/event/scheduler/scheduler.o 00:02:54.332 LINK hello_world 00:02:54.332 LINK sgl 00:02:54.590 CXX test/cpp_headers/nvmf_spec.o 00:02:54.590 CC examples/blob/cli/blobcli.o 00:02:54.590 CXX test/cpp_headers/nvmf_transport.o 00:02:54.848 LINK scheduler 00:02:54.848 CXX test/cpp_headers/opal.o 00:02:55.105 LINK esnap 00:02:55.105 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:55.105 CXX test/cpp_headers/opal_spec.o 00:02:55.363 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.363 CXX test/cpp_headers/pci_ids.o 00:02:55.622 LINK accel_ut 00:02:55.622 LINK blobcli 00:02:55.622 LINK spdk_nvme_discover 00:02:55.622 CXX test/cpp_headers/pipe.o 00:02:55.881 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:55.881 LINK blob_bdev_ut 00:02:55.881 CXX test/cpp_headers/queue.o 00:02:55.881 CXX test/cpp_headers/reduce.o 00:02:56.139 CXX test/cpp_headers/rpc.o 00:02:56.397 CC examples/nvme/reconnect/reconnect.o 00:02:56.397 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.397 CC app/spdk_top/spdk_top.o 00:02:56.397 CC test/nvme/e2edp/nvme_dp.o 00:02:56.397 CXX test/cpp_headers/scheduler.o 00:02:56.655 LINK reconnect 00:02:56.913 CXX test/cpp_headers/scsi.o 00:02:56.913 LINK nvme_manage 00:02:56.913 LINK nvme_dp 00:02:57.171 CXX test/cpp_headers/scsi_spec.o 00:02:57.429 CXX test/cpp_headers/sock.o 00:02:57.688 CXX test/cpp_headers/stdinc.o 00:02:57.688 CXX test/cpp_headers/string.o 00:02:57.688 CC examples/sock/hello_world/hello_sock.o 00:02:57.946 LINK spdk_top 00:02:57.946 CXX test/cpp_headers/thread.o 00:02:58.204 CXX test/cpp_headers/trace.o 00:02:58.204 LINK hello_sock 00:02:58.204 CC examples/vmd/lsvmd/lsvmd.o 00:02:58.204 CXX test/cpp_headers/trace_parser.o 00:02:58.204 CXX test/cpp_headers/tree.o 00:02:58.204 CC examples/nvme/arbitration/arbitration.o 00:02:58.462 LINK lsvmd 00:02:58.462 CXX test/cpp_headers/ublk.o 00:02:58.462 CC app/vhost/vhost.o 00:02:58.462 CC test/nvme/overhead/overhead.o 00:02:58.720 CXX test/cpp_headers/util.o 00:02:58.720 LINK vhost 00:02:58.720 LINK arbitration 00:02:58.720 CXX test/cpp_headers/uuid.o 00:02:58.720 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:02:58.978 LINK overhead 00:02:58.978 CXX test/cpp_headers/version.o 00:02:58.978 LINK tree_ut 00:02:59.237 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.237 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:02:59.237 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.496 LINK bdev_ut 00:02:59.496 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:59.496 CC app/spdk_dd/spdk_dd.o 00:02:59.496 CXX test/cpp_headers/vhost.o 00:02:59.754 CXX test/cpp_headers/vmd.o 00:02:59.754 LINK spdk_dd 00:02:59.754 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:00.013 CC examples/vmd/led/led.o 00:03:00.013 CXX test/cpp_headers/xor.o 00:03:00.013 LINK led 00:03:00.270 CC examples/nvme/hotplug/hotplug.o 00:03:00.270 LINK dma_ut 00:03:00.270 CXX test/cpp_headers/zipf.o 00:03:00.556 LINK hotplug 00:03:00.556 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.813 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:00.813 LINK cmb_copy 00:03:01.072 CC test/nvme/err_injection/err_injection.o 00:03:01.331 LINK blobfs_async_ut 00:03:01.331 LINK err_injection 00:03:01.331 CC examples/nvme/abort/abort.o 00:03:01.896 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:01.896 LINK abort 00:03:01.896 LINK pmr_persistence 
00:03:01.896 CC test/unit/lib/event/app.c/app_ut.o 00:03:02.460 LINK blobfs_sync_ut 00:03:02.716 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:02.716 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:02.973 LINK app_ut 00:03:02.973 CC test/nvme/startup/startup.o 00:03:02.973 LINK blobfs_bdev_ut 00:03:03.230 LINK startup 00:03:03.230 CC app/fio/nvme/fio_plugin.o 00:03:03.230 LINK ioat_ut 00:03:03.501 CC app/fio/bdev/fio_plugin.o 00:03:03.501 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:03.501 CC examples/nvmf/nvmf/nvmf.o 00:03:03.501 CC examples/util/zipf/zipf.o 00:03:03.776 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:03.776 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:03.776 LINK zipf 00:03:03.776 LINK nvmf 00:03:04.035 LINK spdk_nvme 00:03:04.035 LINK spdk_bdev 00:03:04.293 LINK init_grp_ut 00:03:04.293 LINK blob_ut 00:03:04.552 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:04.810 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:04.810 LINK reactor_ut 00:03:04.810 LINK part_ut 00:03:04.810 CC test/nvme/reserve/reserve.o 00:03:05.068 CC test/unit/lib/log/log.c/log_ut.o 00:03:05.068 LINK reserve 00:03:05.068 LINK jsonrpc_server_ut 00:03:05.068 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:05.327 LINK log_ut 00:03:05.327 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:05.327 LINK conn_ut 00:03:05.585 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:05.585 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:05.585 LINK scsi_nvme_ut 00:03:05.585 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:05.843 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:06.100 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:06.100 LINK param_ut 00:03:06.357 LINK notify_ut 00:03:06.615 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:06.615 CC test/nvme/simple_copy/simple_copy.o 00:03:06.615 LINK gpt_ut 00:03:06.615 CC test/nvme/connect_stress/connect_stress.o 00:03:06.871 LINK simple_copy 00:03:06.872 LINK connect_stress 00:03:07.128 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:07.386 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:07.386 LINK json_parse_ut 00:03:07.643 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:07.643 LINK nvme_ut 00:03:07.901 CC examples/thread/thread/thread_ex.o 00:03:07.901 LINK portal_grp_ut 00:03:07.901 LINK lvol_ut 00:03:08.158 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:08.158 LINK thread 00:03:08.158 LINK iscsi_ut 00:03:08.158 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:08.158 LINK json_util_ut 00:03:08.416 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:08.416 CC test/nvme/boot_partition/boot_partition.o 00:03:08.416 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:08.676 LINK boot_partition 00:03:08.676 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:08.676 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:08.676 LINK tgt_node_ut 00:03:08.934 LINK vbdev_lvol_ut 00:03:09.235 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:09.492 LINK json_write_ut 00:03:09.492 LINK nvme_ns_ut 00:03:09.492 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:09.492 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:09.750 LINK nvme_ctrlr_cmd_ut 00:03:09.750 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:09.750 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:09.750 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:10.008 CC test/nvme/compliance/nvme_compliance.o 00:03:10.008 CC 
test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:10.574 LINK nvme_compliance 00:03:10.832 LINK nvme_ns_ocssd_cmd_ut 00:03:10.832 LINK nvme_poll_group_ut 00:03:11.090 LINK nvme_ns_cmd_ut 00:03:11.348 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:11.348 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:11.348 LINK nvme_qpair_ut 00:03:11.605 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:11.605 CC examples/idxd/perf/perf.o 00:03:11.605 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:11.605 LINK nvme_quirks_ut 00:03:11.863 LINK nvme_ctrlr_ut 00:03:11.863 LINK nvme_pcie_ut 00:03:11.863 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.863 LINK idxd_perf 00:03:12.184 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:12.184 LINK fused_ordering 00:03:12.184 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:12.184 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:12.463 LINK doorbell_aers 00:03:12.723 LINK nvme_transport_ut 00:03:12.723 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.982 LINK interrupt_tgt 00:03:13.241 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:13.241 LINK nvme_io_msg_ut 00:03:13.498 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:13.756 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:14.017 LINK bdev_ut 00:03:14.017 CC test/nvme/fdp/fdp.o 00:03:14.017 LINK nvme_pcie_common_ut 00:03:14.276 LINK nvme_fabric_ut 00:03:14.276 LINK fdp 00:03:14.276 LINK nvme_tcp_ut 00:03:14.536 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:14.536 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:14.536 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:14.795 LINK bdev_zone_ut 00:03:14.795 LINK tcp_ut 00:03:14.795 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:15.054 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:15.054 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:15.313 LINK bdev_raid_sb_ut 00:03:15.313 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:15.572 LINK ctrlr_ut 00:03:15.831 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:15.831 CC test/nvme/cuse/cuse.o 00:03:15.831 LINK nvme_opal_ut 00:03:15.831 LINK subsystem_ut 00:03:16.090 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:16.090 LINK concat_ut 00:03:16.090 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:16.090 LINK ctrlr_discovery_ut 00:03:16.348 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:16.606 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:16.606 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:16.864 LINK raid1_ut 00:03:16.864 LINK bdev_raid_ut 00:03:17.431 LINK raid0_ut 00:03:17.688 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:17.688 LINK cuse 00:03:17.688 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:17.688 LINK nvme_cuse_ut 00:03:17.947 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:18.206 LINK nvme_rdma_ut 00:03:18.206 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:18.206 LINK dev_ut 00:03:18.206 LINK raid5f_ut 00:03:18.206 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:18.206 LINK vbdev_zone_block_ut 00:03:18.464 LINK ctrlr_bdev_ut 00:03:18.464 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:18.464 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:18.722 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:18.722 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:18.722 LINK base64_ut 00:03:18.980 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:18.980 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:18.980 LINK cpuset_ut 
00:03:19.238 LINK bit_array_ut 00:03:19.238 LINK nvmf_ut 00:03:19.497 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:19.497 LINK iobuf_ut 00:03:19.497 LINK posix_ut 00:03:19.497 LINK scsi_ut 00:03:19.754 LINK lun_ut 00:03:19.754 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:19.754 LINK sock_ut 00:03:19.754 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:19.754 LINK crc16_ut 00:03:20.013 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:20.013 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:20.013 LINK crc32_ieee_ut 00:03:20.013 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:20.013 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:20.271 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:20.271 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:20.271 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:20.271 LINK crc32c_ut 00:03:20.531 LINK pci_event_ut 00:03:20.531 LINK rpc_ut 00:03:20.790 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:20.790 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:20.790 LINK subsystem_ut 00:03:20.790 LINK crc64_ut 00:03:20.790 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:20.790 LINK thread_ut 00:03:21.133 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:21.133 LINK rpc_ut 00:03:21.133 LINK iov_ut 00:03:21.407 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:21.407 LINK scsi_bdev_ut 00:03:21.407 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:21.407 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:21.665 CC test/unit/lib/util/math.c/math_ut.o 00:03:21.665 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:21.665 LINK math_ut 00:03:21.922 LINK auth_ut 00:03:21.923 LINK dif_ut 00:03:21.923 LINK keyring_ut 00:03:21.923 LINK idxd_user_ut 00:03:21.923 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:22.180 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:22.180 CC test/unit/lib/util/string.c/string_ut.o 00:03:22.180 LINK scsi_pr_ut 00:03:22.180 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:22.439 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:22.439 LINK idxd_ut 00:03:22.439 LINK string_ut 00:03:22.439 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:22.698 LINK xor_ut 00:03:22.698 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:22.698 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:22.698 LINK pipe_ut 00:03:22.956 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:03:22.956 LINK ftl_l2p_ut 00:03:22.956 LINK common_ut 00:03:23.214 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:23.214 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:23.472 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:23.472 LINK ftl_bitmap_ut 00:03:23.729 LINK ftl_io_ut 00:03:23.729 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:23.988 LINK ftl_mempool_ut 00:03:23.988 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:24.245 LINK bdev_nvme_ut 00:03:24.245 LINK ftl_mngt_ut 00:03:24.245 LINK ftl_p2l_ut 00:03:24.245 LINK rdma_ut 00:03:24.503 LINK ftl_band_ut 00:03:24.503 LINK transport_ut 00:03:25.068 LINK vhost_ut 00:03:25.633 LINK ftl_sb_ut 00:03:25.634 LINK ftl_layout_upgrade_ut 00:03:25.892 00:03:25.892 real 2m11.688s 00:03:25.892 user 11m25.944s 00:03:25.892 sys 2m34.257s 00:03:25.892 11:26:57 unittest_build -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:03:25.892 11:26:57 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:25.892 ************************************ 00:03:25.892 END TEST unittest_build 00:03:25.892 ************************************ 00:03:25.892 11:26:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 
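The `real/user/sys` timing triple and the `END TEST unittest_build` banner above are produced by the test wrapper in autotest_common.sh, which runs each named test under `time` and brackets its output with banners. The wrapper itself is not reproduced in this log; the following is only a minimal sketch of that pattern under stated assumptions, not SPDK's exact implementation.

  # Simplified run_test-style wrapper (assumed shape; the real function lives in autotest_common.sh).
  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"            # produces the real/user/sys summary seen above
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }

  # Hypothetical usage; the actual command wrapped for unittest_build is not shown in this log.
  run_test "unittest_build" make -j10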
00:03:25.892 11:26:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.892 11:26:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.892 11:26:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.892 11:26:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.892 11:26:57 -- pm/common@44 -- $ pid=2187 00:03:25.892 11:26:57 -- pm/common@50 -- $ kill -TERM 2187 00:03:25.892 11:26:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.892 11:26:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.892 11:26:57 -- pm/common@44 -- $ pid=2189 00:03:25.892 11:26:57 -- pm/common@50 -- $ kill -TERM 2189 00:03:25.892 11:26:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.892 11:26:57 -- nvmf/common.sh@7 -- # uname -s 00:03:25.892 11:26:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.892 11:26:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.892 11:26:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.892 11:26:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.892 11:26:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.892 11:26:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.892 11:26:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.892 11:26:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.892 11:26:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.892 11:26:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.892 11:26:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b3fe2eab-3c24-4fd5-855c-abc957a7100d 00:03:25.892 11:26:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=b3fe2eab-3c24-4fd5-855c-abc957a7100d 00:03:25.892 11:26:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.892 11:26:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.893 11:26:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:25.893 11:26:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.893 11:26:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.893 11:26:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.893 11:26:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.893 11:26:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.893 11:26:57 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:25.893 11:26:57 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:25.893 11:26:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:25.893 11:26:57 -- paths/export.sh@5 -- # export PATH 00:03:25.893 
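The pm/common trace above shows how the resource monitors started earlier in the run are torn down: for each monitor, the script checks for a pidfile under the shared output/power directory and sends SIGTERM to the recorded PID (2187 and 2189 here). A condensed sketch of that loop follows; variable names mirror the trace, but the MONITOR_RESOURCES contents and the pidfile read are assumed from the paths and PIDs that appear in the log rather than taken from the script source.

  # Condensed sketch of the shutdown path traced above (pm/common).
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)   # assumed from the pidfiles in the trace
  power_dir=/home/vagrant/spdk_repo/spdk/../output/power

  signal_monitor_resources() {
      local signal=$1 monitor pid
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          # Each monitor wrote its PID here when start_monitor_resources launched it.
          if [[ -e $power_dir/$monitor.pid ]]; then
              pid=$(<"$power_dir/$monitor.pid")
              kill -"$signal" "$pid"   # e.g. kill -TERM 2187 / kill -TERM 2189 in this run
          fi
      done
  }

  signal_monitor_resources TERM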
11:26:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:25.893 11:26:57 -- nvmf/common.sh@47 -- # : 0 00:03:25.893 11:26:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:25.893 11:26:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:25.893 11:26:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.893 11:26:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.893 11:26:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.893 11:26:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:25.893 11:26:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:25.893 11:26:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:25.893 11:26:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.893 11:26:57 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.893 11:26:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.893 11:26:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:25.893 11:26:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.893 11:26:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.893 11:26:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.893 11:26:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.893 11:26:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:26.151 11:26:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:26.151 11:26:57 -- spdk/autotest.sh@48 -- # udevadm_pid=100041 00:03:26.151 11:26:57 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:26.151 11:26:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:26.151 11:26:57 -- pm/common@17 -- # local monitor 00:03:26.151 11:26:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.151 11:26:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.151 11:26:57 -- pm/common@25 -- # sleep 1 00:03:26.151 11:26:57 -- pm/common@21 -- # date +%s 00:03:26.151 11:26:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1718018817 00:03:26.151 11:26:57 -- pm/common@21 -- # date +%s 00:03:26.151 11:26:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1718018817 00:03:26.151 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1718018817_collect-vmstat.pm.log 00:03:26.151 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1718018817_collect-cpu-load.pm.log 00:03:27.085 11:26:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:27.085 11:26:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:27.085 11:26:58 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:27.085 11:26:58 -- common/autotest_common.sh@10 -- # set +x 00:03:27.085 11:26:58 -- spdk/autotest.sh@59 -- # create_test_list 00:03:27.085 11:26:58 -- common/autotest_common.sh@747 -- # xtrace_disable 00:03:27.085 11:26:58 -- common/autotest_common.sh@10 -- # set 
+x 00:03:27.085 11:26:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:27.085 11:26:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:27.085 11:26:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:27.085 11:26:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:27.085 11:26:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:27.085 11:26:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:27.085 11:26:59 -- common/autotest_common.sh@1454 -- # uname 00:03:27.085 11:26:59 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:03:27.085 11:26:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:27.085 11:26:59 -- common/autotest_common.sh@1474 -- # uname 00:03:27.085 11:26:59 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:03:27.085 11:26:59 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:27.085 11:26:59 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:27.085 11:26:59 -- spdk/autotest.sh@72 -- # hash lcov 00:03:27.085 11:26:59 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:27.085 11:26:59 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:27.085 --rc lcov_branch_coverage=1 00:03:27.085 --rc lcov_function_coverage=1 00:03:27.085 --rc genhtml_branch_coverage=1 00:03:27.085 --rc genhtml_function_coverage=1 00:03:27.085 --rc genhtml_legend=1 00:03:27.085 --rc geninfo_all_blocks=1 00:03:27.085 ' 00:03:27.085 11:26:59 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:27.085 --rc lcov_branch_coverage=1 00:03:27.085 --rc lcov_function_coverage=1 00:03:27.085 --rc genhtml_branch_coverage=1 00:03:27.085 --rc genhtml_function_coverage=1 00:03:27.085 --rc genhtml_legend=1 00:03:27.085 --rc geninfo_all_blocks=1 00:03:27.085 ' 00:03:27.085 11:26:59 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:27.085 --rc lcov_branch_coverage=1 00:03:27.085 --rc lcov_function_coverage=1 00:03:27.085 --rc genhtml_branch_coverage=1 00:03:27.085 --rc genhtml_function_coverage=1 00:03:27.085 --rc genhtml_legend=1 00:03:27.085 --rc geninfo_all_blocks=1 00:03:27.085 --no-external' 00:03:27.085 11:26:59 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:27.085 --rc lcov_branch_coverage=1 00:03:27.085 --rc lcov_function_coverage=1 00:03:27.085 --rc genhtml_branch_coverage=1 00:03:27.085 --rc genhtml_function_coverage=1 00:03:27.085 --rc genhtml_legend=1 00:03:27.085 --rc geninfo_all_blocks=1 00:03:27.085 --no-external' 00:03:27.085 11:26:59 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:27.085 lcov: LCOV version 1.15 00:03:27.085 11:26:59 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:33.641 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:33.641 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:20.314 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:20.314 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:20.314 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:20.315 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:20.315 11:27:51 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:20.315 11:27:51 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:20.315 11:27:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.315 11:27:51 -- spdk/autotest.sh@91 -- # rm -f 00:04:20.315 11:27:51 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:20.315 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:20.315 11:27:51 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:20.315 11:27:51 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:20.315 11:27:51 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:20.315 11:27:51 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:20.315 11:27:51 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:20.315 11:27:51 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:20.315 11:27:51 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:20.315 11:27:51 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.315 11:27:51 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:20.315 11:27:51 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:20.315 11:27:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.315 11:27:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:20.315 11:27:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:20.315 11:27:51 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:20.315 11:27:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.315 No valid GPT data, bailing 00:04:20.315 11:27:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.315 11:27:51 -- scripts/common.sh@391 -- # pt= 00:04:20.315 11:27:51 -- scripts/common.sh@392 -- # return 1 00:04:20.315 11:27:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:20.315 1+0 records in 00:04:20.315 1+0 records out 00:04:20.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459921 s, 228 MB/s 00:04:20.315 11:27:51 -- spdk/autotest.sh@118 -- # sync 00:04:20.315 11:27:51 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.315 11:27:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.315 
11:27:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:21.688 11:27:53 -- spdk/autotest.sh@124 -- # uname -s 00:04:21.688 11:27:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:21.688 11:27:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:21.688 11:27:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:21.688 11:27:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:21.688 11:27:53 -- common/autotest_common.sh@10 -- # set +x 00:04:21.688 ************************************ 00:04:21.688 START TEST setup.sh 00:04:21.688 ************************************ 00:04:21.688 11:27:53 setup.sh -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:21.688 * Looking for test storage... 00:04:21.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:21.688 11:27:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:21.688 11:27:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:21.688 11:27:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:21.688 11:27:53 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:21.688 11:27:53 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:21.688 11:27:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.688 ************************************ 00:04:21.688 START TEST acl 00:04:21.688 ************************************ 00:04:21.688 11:27:53 setup.sh.acl -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:21.948 * Looking for test storage... 00:04:21.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:21.948 11:27:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:21.948 11:27:53 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:21.948 11:27:53 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:21.948 11:27:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:21.948 11:27:53 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:21.948 11:27:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:21.948 11:27:53 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:21.948 11:27:53 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.948 11:27:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:21.948 11:27:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:21.948 11:27:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:21.948 11:27:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:21.948 11:27:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:21.948 11:27:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:21.948 11:27:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.948 11:27:53 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.515 11:27:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:22.515 11:27:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:22.515 11:27:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.515 11:27:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:22.515 
11:27:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.515 11:27:54 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.773 11:27:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:22.773 11:27:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.773 11:27:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.773 Hugepages 00:04:22.773 node hugesize free / total 00:04:22.773 11:27:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:22.773 11:27:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.773 11:27:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:23.139 00:04:23.139 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.139 11:27:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:23.139 11:27:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:23.139 11:27:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:23.139 11:27:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:23.139 11:27:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:23.139 11:27:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:23.139 11:27:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:23.139 11:27:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:23.139 11:27:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:23.139 11:27:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:23.139 11:27:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:23.139 11:27:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:23.139 11:27:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:23.139 11:27:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:23.139 11:27:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:23.139 11:27:55 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:23.139 11:27:55 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:23.139 11:27:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:23.139 ************************************ 00:04:23.139 START TEST denied 00:04:23.139 ************************************ 00:04:23.139 11:27:55 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:04:23.139 11:27:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:23.139 11:27:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:23.139 11:27:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.139 11:27:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.139 11:27:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:24.520 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:10.0/driver 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.520 11:27:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.085 00:04:25.085 real 0m1.863s 00:04:25.085 user 0m0.499s 00:04:25.085 sys 0m1.420s 00:04:25.085 11:27:56 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:25.085 ************************************ 00:04:25.085 END TEST denied 00:04:25.085 ************************************ 00:04:25.085 11:27:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:25.085 11:27:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:25.085 11:27:56 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:25.085 11:27:56 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:25.085 11:27:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.085 ************************************ 00:04:25.085 START TEST allowed 00:04:25.085 ************************************ 00:04:25.085 11:27:57 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:04:25.085 11:27:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:25.085 11:27:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:25.085 11:27:57 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:25.085 11:27:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.085 11:27:57 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.988 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.988 11:27:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:26.988 11:27:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:26.988 11:27:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:26.988 11:27:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.988 11:27:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.246 00:04:27.246 real 0m2.152s 00:04:27.246 user 0m0.456s 00:04:27.246 sys 0m1.693s 00:04:27.246 11:27:59 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:27.246 11:27:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:27.246 ************************************ 00:04:27.246 END TEST allowed 00:04:27.246 ************************************ 00:04:27.246 00:04:27.246 real 0m5.489s 00:04:27.246 user 0m1.655s 00:04:27.246 sys 0m3.981s 00:04:27.246 11:27:59 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:27.246 11:27:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.246 ************************************ 00:04:27.246 END TEST acl 00:04:27.246 ************************************ 00:04:27.246 11:27:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:27.246 11:27:59 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:27.246 11:27:59 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:04:27.246 11:27:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.246 ************************************ 00:04:27.246 START TEST hugepages 00:04:27.246 ************************************ 00:04:27.246 11:27:59 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:27.507 * Looking for test storage... 00:04:27.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 2713380 kB' 'MemAvailable: 7400800 kB' 'Buffers: 36276 kB' 'Cached: 4781548 kB' 'SwapCached: 0 kB' 'Active: 1030960 kB' 'Inactive: 3908232 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 132000 kB' 'Active(file): 1029928 kB' 'Inactive(file): 3776232 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'AnonPages: 150888 kB' 'Mapped: 68208 kB' 'Shmem: 2600 kB' 'KReclaimable: 202564 kB' 'Slab: 268056 kB' 'SReclaimable: 202564 kB' 'SUnreclaim: 65492 kB' 'KernelStack: 4552 kB' 'PageTables: 3824 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024332 kB' 'Committed_AS: 504896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 
11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.507 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:27.508 11:27:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:27.508 11:27:59 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:27.508 11:27:59 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:27.508 11:27:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.508 ************************************ 00:04:27.508 START TEST default_setup 00:04:27.508 ************************************ 00:04:27.508 11:27:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:04:27.508 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:27.508 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.508 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:27.508 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.509 11:27:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:28.074 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4795296 kB' 'MemAvailable: 9482704 kB' 'Buffers: 36276 kB' 'Cached: 4781564 kB' 'SwapCached: 0 kB' 'Active: 1031024 kB' 'Inactive: 3921972 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145768 kB' 'Active(file): 1029976 kB' 'Inactive(file): 3776204 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 164684 kB' 'Mapped: 67912 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268120 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65588 kB' 'KernelStack: 4484 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.646 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 
11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.647 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4795548 kB' 'MemAvailable: 9482956 kB' 'Buffers: 36276 kB' 'Cached: 4781564 kB' 'SwapCached: 0 kB' 'Active: 1031016 kB' 'Inactive: 3921848 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145644 kB' 'Active(file): 1029976 kB' 'Inactive(file): 3776204 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 164268 kB' 'Mapped: 67912 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268136 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65604 kB' 'KernelStack: 4420 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.648 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
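# The repeated "[[ <field> == ... ]] / continue / read -r var val _" entries in this
# trace are get_meminfo from setup/common.sh scanning /proc/meminfo one field at a
# time until it reaches the requested key (here HugePages_Surp). A simplified sketch
# of that helper, reconstructed from the trace; the field names and paths come from
# the log, the exact implementation in setup/common.sh may differ:
#
#   get_meminfo() {                              # get_meminfo <field> [numa-node]
#     local get=$1 node=${2:-} var val _
#     local mem_f=/proc/meminfo
#     # per-node stats are read from sysfs when a node is given
#     if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
#       mem_f=/sys/devices/system/node/node$node/meminfo
#     fi
#     while IFS=': ' read -r var val _; do
#       [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }   # value without "kB"
#     done < "$mem_f"
#     return 1
#   }
#
# Here the scan ends at "HugePages_Surp: 0", so the caller in hugepages.sh receives 0
# and sets surp=0 a few entries below.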
00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.649 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4795800 kB' 'MemAvailable: 9483212 kB' 'Buffers: 36276 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031016 kB' 'Inactive: 3921892 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145684 kB' 'Active(file): 1029976 kB' 'Inactive(file): 3776208 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 164316 kB' 'Mapped: 67912 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268136 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65604 kB' 'KernelStack: 4416 kB' 'PageTables: 3560 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 
11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.650 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
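# This pass of get_meminfo is fetching HugePages_Rsvd for verify_nr_hugepages in
# setup/hugepages.sh, which checks that the default_setup test really got the
# 1024 x 2048 kB pages it asked for. A condensed sketch of that check, pieced
# together from the hugepages.sh entries visible in this trace; the names come from
# the log, the exact implementation may differ:
#
#   verify_nr_hugepages() {
#     local anon=0 surp resv
#     # anonymous THP only counts when transparent_hugepage is not "[never]"
#     [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] \
#       && anon=$(get_meminfo AnonHugePages)
#     surp=$(get_meminfo HugePages_Surp)
#     resv=$(get_meminfo HugePages_Rsvd)
#     echo "nr_hugepages=$nr_hugepages"
#     echo "resv_hugepages=$resv"
#     echo "surplus_hugepages=$surp"
#     echo "anon_hugepages=$anon"
#     # the pool matches only if the kernel reports exactly the requested count
#     (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
#     (( $(get_meminfo HugePages_Total) == nr_hugepages ))
#   }
#
# In this run anon, surp and resv all come back 0 and HugePages_Total is 1024, so
# both checks pass for nr_hugepages=1024.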
00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.651 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:28.651 nr_hugepages=1024 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:28.652 resv_hugepages=0 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.652 surplus_hugepages=0 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.652 anon_hugepages=0 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12242972 kB' 'MemFree: 4795800 kB' 'MemAvailable: 9483212 kB' 'Buffers: 36276 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031016 kB' 'Inactive: 3921892 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145684 kB' 'Active(file): 1029976 kB' 'Inactive(file): 3776208 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 164056 kB' 'Mapped: 67912 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268136 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65604 kB' 'KernelStack: 4416 kB' 'PageTables: 3560 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 
11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.652 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 
11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.653 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4796324 kB' 'MemUsed: 7446648 kB' 'SwapCached: 0 kB' 'Active: 1031016 kB' 'Inactive: 3922140 kB' 
'Active(anon): 1040 kB' 'Inactive(anon): 145932 kB' 'Active(file): 1029976 kB' 'Inactive(file): 3776208 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'FilePages: 4817844 kB' 'Mapped: 67912 kB' 'AnonPages: 164284 kB' 'Shmem: 2596 kB' 'KernelStack: 4384 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 202532 kB' 'Slab: 268136 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.654 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
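The block above repeats the same scan against /sys/devices/system/node/node0/meminfo to pull HugePages_Surp for node 0, after get_nodes globbed /sys/devices/system/node/node+([0-9]) and recorded a single node holding 1024 pages. A short self-contained sketch of that per-node pass; the associative array name and the awk field positions are assumptions based on the "Node N Key: value" layout of the sysfs files:

#!/usr/bin/env bash
# Sketch: enumerate NUMA nodes and read one hugepage counter per node,
# in the spirit of the get_nodes / get_meminfo calls traced above.
shopt -s extglob nullglob
declare -A nodes_test

for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:  1024"
    nodes_test[$node]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
done

echo "nodes: ${!nodes_test[*]}"            # a single "0" on this VM
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[$node]}"  # node0=1024 at this point in the run
done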
00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.655 node0=1024 expecting 1024 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:28.655 00:04:28.655 real 0m1.235s 00:04:28.655 user 0m0.386s 00:04:28.655 sys 0m0.858s 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:28.655 11:28:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:28.655 ************************************ 00:04:28.655 END TEST default_setup 00:04:28.655 ************************************ 00:04:28.943 11:28:00 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:28.943 11:28:00 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:28.943 11:28:00 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:28.943 11:28:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.943 ************************************ 00:04:28.943 START TEST per_node_1G_alloc 00:04:28.943 ************************************ 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.943 11:28:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:29.201 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.464 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5842660 kB' 'MemAvailable: 10530068 kB' 'Buffers: 36276 kB' 'Cached: 4781564 kB' 'SwapCached: 0 kB' 'Active: 1031032 kB' 'Inactive: 3921932 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145736 kB' 'Active(file): 1029984 kB' 'Inactive(file): 3776196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 164364 kB' 'Mapped: 67920 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268072 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65540 kB' 'KernelStack: 4384 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
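per_node_1G_alloc above asks get_test_nr_hugepages for 1048576 kB on node 0; with the 2048 kB default hugepage size shown in the meminfo dumps, that works out to 512 pages, which is why the trace sets nr_hugepages=512, exports NRHUGE=512 and HUGENODE=0, and re-runs scripts/setup.sh before verifying. The conversion, spelled out (variable names are illustrative):

# 1 GiB requested for node 0, expressed in kB as in the trace
size_kb=1048576
hugepage_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( size_kb / hugepage_kb ))                             # 1048576 / 2048 = 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"
# The test then invokes scripts/setup.sh with those two variables exported.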
00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.465 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5842660 kB' 'MemAvailable: 10530068 kB' 'Buffers: 36276 kB' 'Cached: 4781564 kB' 'SwapCached: 0 kB' 'Active: 1031032 kB' 'Inactive: 3921932 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145736 kB' 'Active(file): 1029984 kB' 'Inactive(file): 3776196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 164320 kB' 'Mapped: 67920 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268072 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65540 kB' 'KernelStack: 4368 kB' 'PageTables: 3456 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.466 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.467 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5842660 kB' 'MemAvailable: 10530068 kB' 'Buffers: 36276 kB' 'Cached: 4781564 kB' 'SwapCached: 0 kB' 'Active: 1031024 kB' 'Inactive: 3921796 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145600 kB' 'Active(file): 1029984 kB' 'Inactive(file): 3776196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 164224 kB' 'Mapped: 67916 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268096 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65564 kB' 'KernelStack: 4416 kB' 'PageTables: 3560 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.468 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 
11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 
11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.469 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:29.470 nr_hugepages=512 00:04:29.470 resv_hugepages=0 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.470 surplus_hugepages=0 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.470 anon_hugepages=0 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.470 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5842660 kB' 'MemAvailable: 10530068 kB' 'Buffers: 36276 kB' 'Cached: 4781564 kB' 'SwapCached: 0 kB' 'Active: 1031024 kB' 'Inactive: 3921476 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145280 kB' 'Active(file): 1029984 kB' 'Inactive(file): 3776196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 164160 kB' 'Mapped: 67916 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268096 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65564 kB' 'KernelStack: 4384 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
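The repeated "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" records above are setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: it splits each line on ': ', skips every key that does not match the one requested, echoes the matching value, and returns it to hugepages.sh (0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd here). A minimal sketch of that lookup pattern follows; parse_meminfo is an illustrative name, not the SPDK helper, and the sketch omits the "Node N " prefix stripping that the real function applies to per-node meminfo files (visible at common.sh@29 above).

parse_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Per-node lookups switch to that NUMA node's own meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as the trace above does
        echo "$val"                        # numeric value; a trailing "kB" unit lands in $_
        return 0
    done < "$mem_f"
    return 1
}
parse_meminfo AnonHugePages   # prints 0 on the system captured in this log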
00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
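With anon, surp and resv all read back as 0, the bookkeeping echoed at hugepages.sh@102-@105 and the two arithmetic tests at @107 and @109 confirm that every one of the 512 configured hugepages is an ordinary, non-surplus, non-reserved page before the HugePages_Total readback that continues below. The same checks, written out as a stand-alone sketch with the values copied from this log (nothing here is computed independently of the trace):

# Values echoed by hugepages.sh in the log above.
nr_hugepages=512   # HugePages_Total configured for the per_node_1G_alloc test
surp=0             # HugePages_Surp returned by get_meminfo
resv=0             # HugePages_Rsvd returned by get_meminfo
anon=0             # AnonHugePages (kB) returned by get_meminfo
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
# The two tests seen in the trace at hugepages.sh@107 and @109.
if (( 512 == nr_hugepages + surp + resv )) && (( 512 == nr_hugepages )); then
    echo "hugepage accounting OK"
else
    echo "hugepage accounting mismatch" >&2
fi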
00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.471 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in 
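The xtrace above is setup/common.sh's get_meminfo walking the meminfo fields one by one until it reaches HugePages_Total and echoes 512. A minimal stand-alone sketch of that lookup, assuming the same per-node meminfo layout; the helper name and structure here are illustrative, not the actual setup/common.sh source:

```bash
# Illustrative sketch of the lookup traced above (not the actual setup/common.sh
# helper): read one field from a per-node meminfo file, stripping the
# "Node <n>" prefix those files put in front of every line.
shopt -s extglob

get_node_meminfo() {            # hypothetical name
    local get=$1 node=$2 line var val
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }          # no-op for plain /proc/meminfo lines
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                      # e.g. 512 for HugePages_Total above
            return 0
        fi
    done < "$mem_f"
    return 1
}

# get_node_meminfo HugePages_Total 0   ->  512 in the run traced here
```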
"${!nodes_test[@]}" 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5842660 kB' 'MemUsed: 6400312 kB' 'SwapCached: 0 kB' 'Active: 1031024 kB' 'Inactive: 3921724 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145528 kB' 'Active(file): 1029984 kB' 'Inactive(file): 3776196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'FilePages: 4817840 kB' 'Mapped: 67916 kB' 'AnonPages: 164128 kB' 'Shmem: 2596 kB' 'KernelStack: 4452 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 202532 kB' 'Slab: 268096 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.472 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.472 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.473 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:29.474 node0=512 expecting 512 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.474 00:04:29.474 real 0m0.774s 00:04:29.474 user 0m0.308s 00:04:29.474 sys 0m0.513s 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.474 11:28:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.474 ************************************ 00:04:29.474 END TEST per_node_1G_alloc 00:04:29.474 ************************************ 00:04:29.732 11:28:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:29.732 11:28:01 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:29.732 11:28:01 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.732 11:28:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.732 ************************************ 00:04:29.732 START TEST even_2G_alloc 00:04:29.732 ************************************ 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 
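even_2G_alloc above turns a 2 GiB request (get_test_nr_hugepages 2097152) into nr_hugepages=1024 and, with no user-specified nodes, places the whole pool on the single node before exporting NRHUGE=1024 and HUGE_EVEN_ALLOC=yes for scripts/setup.sh. A rough sketch of that sizing arithmetic, assuming the size argument is in kB as the numbers in this run suggest; variable names are illustrative, not the actual setup/hugepages.sh source:

```bash
# Sketch of the sizing done by get_test_nr_hugepages above (illustrative names;
# the kB unit for the size argument is an assumption that matches this run:
# 2097152 / 2048 = 1024 pages, i.e. the 'Hugetlb: 2097152 kB' seen below).
size_kb=2097152                                                  # 2 GiB of hugepage memory
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$((size_kb / hugepage_kb))                          # -> 1024, as in nr_hugepages=1024 above

# With no user-supplied node list and a single node, everything lands on node 0,
# mirroring nodes_test[_no_nodes - 1]=1024 in the trace.
nodes_test=()
nodes_test[0]=$nr_hugepages
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"
```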
00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.732 11:28:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:29.990 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.563 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4796112 kB' 'MemAvailable: 9483524 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031032 kB' 'Inactive: 3922032 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145832 kB' 'Active(file): 1029984 kB' 'Inactive(file): 3776200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 4 kB' 'AnonPages: 164452 kB' 'Mapped: 68140 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268304 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65772 kB' 'KernelStack: 4400 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB'
[... setup/common.sh@31-32 xtrace elided: the /proc/meminfo fields from MemTotal through VmallocUsed are each read and skipped with 'continue' because they are not AnonHugePages ...]
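Behind this scan is the transparent-hugepage guard from setup/hugepages.sh@96 earlier in the trace ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]); the AnonHugePages lookup it gates ends with echo 0 just below. A small sketch of that guard, under my reading that it only accounts for anonymous huge pages when THP is not globally disabled; this is an assumption, not the actual test code:

```bash
# Sketch of the THP guard seen at setup/hugepages.sh@96 above (illustrative;
# the interpretation that AnonHugePages only matters when THP is enabled is mine).
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

anon=0
if [[ $thp_state != *"[never]"* ]]; then
    # THP may be in use, so read anonymous huge page usage separately.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in the run above
fi
echo "anon=$anon"
```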
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4796344 kB' 'MemAvailable: 9483756 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031036 kB' 'Inactive: 3921940 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145744 kB' 'Active(file): 1029988 kB' 'Inactive(file): 3776196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 164348 kB' 'Mapped: 67920 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268176 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65644 kB' 'KernelStack: 4368 kB' 'PageTables: 3464 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 
518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.565 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.566 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 
11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4796344 kB' 'MemAvailable: 9483756 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031036 kB' 'Inactive: 3922136 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145940 kB' 'Active(file): 1029988 kB' 'Inactive(file): 3776196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 
kB' 'AnonPages: 164320 kB' 'Mapped: 67920 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268176 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65644 kB' 'KernelStack: 4336 kB' 'PageTables: 3388 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.567 
11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.567 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.568 
11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.568 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.569 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.570 nr_hugepages=1024 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.570 resv_hugepages=0 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.570 surplus_hugepages=0 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.570 anon_hugepages=0 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.570 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
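The trace around this point is setup/common.sh's get_meminfo helper scanning a meminfo snapshot key by key (pick a meminfo file, strip any per-node "Node <n> " prefix, split each line on ': ', and stop when the requested key matches). A minimal, runnable sketch of that parsing pattern, reconstructed from what the trace shows; the real helper is the setup/common.sh shown in the trace and may differ in detail, and the shopt line is an assumption needed for the +([0-9]) prefix strip:

  #!/usr/bin/env bash
  # Sketch of the get_meminfo pattern visible in the trace above.
  shopt -s extglob   # assumption: required for the +([0-9]) pattern below

  get_meminfo() {    # usage: get_meminfo <key> [numa-node]
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo mem
      # Per-node statistics come from sysfs when a NUMA node is requested.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node <n> "; drop it so key names
      # match the global /proc/meminfo format.
      mem=("${mem[@]#Node +([0-9]) }")
      # Split each "Key: value kB" line on ': ' and stop at the requested key.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Surp 0   # e.g. surplus 2 MB pages on NUMA node 0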
00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4798776 kB' 'MemAvailable: 9486188 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031040 kB' 'Inactive: 3921964 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 145768 kB' 'Active(file): 1029988 kB' 'Inactive(file): 3776196 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 224 kB' 'Writeback: 260 kB' 'AnonPages: 164696 kB' 'Mapped: 68180 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268176 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65644 kB' 'KernelStack: 4440 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 518260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.571 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.572 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.573 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
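This pass re-reads HugePages_Total so hugepages.sh can cross-check the pool against the values it just extracted (anon=0, surp=0, resv=0 against nr_hugepages=1024). A standalone sketch of that consistency check, using a get_meminfo-style helper like the one sketched above; the variable names and echoes mirror the trace, but the exact assertions live in setup/hugepages.sh:

  # Assumption: get_meminfo is available (see the sketch above).
  nr_hugepages=1024                        # pages requested for the even_2G_alloc test
  anon=$(get_meminfo AnonHugePages)        # transparent hugepage usage, 0 in this run
  surp=$(get_meminfo HugePages_Surp)       # surplus pages, 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)       # reserved pages, 0 in this run
  total=$(get_meminfo HugePages_Total)

  # Mirrors the echoes seen at hugepages.sh@102-@105 in the trace.
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # Mirrors the checks at hugepages.sh@107/@110: the pool size must equal the
  # requested count once surplus and reserved pages are accounted for.
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2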
[setup/common.sh@31-32 xtrace, condensed: the IFS=': ' read loop skips the remaining /proc/meminfo fields - Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped - with continue until HugePages_Total matches]
00:04:30.574 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:30.574 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:30.574 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.574 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
[setup/hugepages.sh@27-33 xtrace, condensed: for node in /sys/devices/system/node/node+([0-9]); nodes_sys[${node##*node}]=1024; no_nodes=1; (( no_nodes > 0 ))]
00:04:30.574 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.574 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.574 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[setup/common.sh@17-31 xtrace, condensed: get=HugePages_Surp, node=0, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, mapfile -t mem reads it and the leading "Node 0 " prefix is stripped from every line]
00:04:30.574 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4798776 kB' 'MemUsed: 7444196 kB' 'SwapCached: 0 kB' 'Active: 1031040 kB' 'Inactive: 3922092 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 145892 kB' 'Active(file): 1029988 kB' 'Inactive(file): 3776200 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 224 kB' 'Writeback: 260 kB' 'FilePages: 4817856 kB' 'Mapped: 68140 kB' 'AnonPages: 164544 kB' 'Shmem: 2596 kB' 'KernelStack: 4356 kB' 'PageTables: 3512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 202532 kB' 'Slab: 268176 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace, condensed: the node0 fields MemTotal through FilePmdMapped are each read and skipped with continue while looking for HugePages_Surp]
[setup/common.sh@31-32 xtrace, condensed: HugePages_Total and HugePages_Free are skipped with continue before HugePages_Surp matches]
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.576 node0=1024 expecting 1024
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:30.576 
00:04:30.576 real 0m1.051s
00:04:30.576 user 0m0.317s
00:04:30.576 sys 0m0.783s
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:30.576 11:28:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:30.576 ************************************
00:04:30.576 END TEST even_2G_alloc
00:04:30.576 ************************************
00:04:30.835 11:28:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:30.835 11:28:02 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:30.835 11:28:02 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:30.835 11:28:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:30.835 ************************************
00:04:30.835 START TEST odd_alloc
00:04:30.835 ************************************
00:04:30.835 11:28:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc
00:04:30.835 11:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:30.835 11:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:30.835 11:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:30.835 11:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:30.835 11:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
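The trace above ends with get_test_nr_hugepages turning the requested size of 2098176 kB into nr_hugepages=1025. The rounding step itself is not visible in the xtrace, so the helper below is only a sketch of that conversion, assuming the 2048 kB Hugepagesize reported in the meminfo dumps later in this log and round-up division; the function name size_to_hugepages is made up for illustration.

    #!/usr/bin/env bash
    # Sketch only: convert a size in kB into a hugepage count, assuming
    # 2048 kB (2 MiB) hugepages and rounding up. 2098176 kB -> 1025 pages,
    # which matches the nr_hugepages=1025 value traced above. This is not
    # the literal setup/hugepages.sh implementation.
    size_to_hugepages() {
        local size_kb=$1
        local hugepage_kb=${2:-2048}   # Hugepagesize: 2048 kB in the meminfo dumps below
        echo $(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    }

    size_to_hugepages 2098176   # prints 1025, the odd_alloc target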
00:04:30.835 11:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[setup/hugepages.sh@62-84 xtrace, condensed: user_nodes=(), _nr_hugepages=1025, _no_nodes=1, nodes_test=(); with no user-requested nodes the whole odd count is assigned via nodes_test[_no_nodes - 1]=1025, so node 0 is expected to hold all 1025 pages]
[setup/hugepages.sh@160 and setup/common.sh@9-10 xtrace, condensed: HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes are exported and setup output re-runs /home/vagrant/spdk_repo/spdk/scripts/setup.sh]
00:04:31.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:31.094 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:31.660 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
[setup/hugepages.sh@89-96 xtrace, condensed: locals node, sorted_t, sorted_s, surp, resv and anon are declared and the transparent-hugepage setting (always [madvise] never) is checked]
00:04:31.660 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[setup/common.sh@17-31 xtrace, condensed: get=AnonHugePages, node is empty, so mem_f stays /proc/meminfo; mapfile -t mem reads it before the field scan starts]
00:04:31.661 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4797388 kB' 'MemAvailable: 9484808 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031056 kB' 'Inactive: 3917996 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141820 kB' 'Active(file): 1030016 kB' 'Inactive(file): 3776176 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 64 kB' 'Writeback: 0 kB' 'AnonPages: 160452 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 267896 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65364 kB' 'KernelStack: 4336 kB' 'PageTables: 3252 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 507400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32 xtrace, condensed: every /proc/meminfo field from MemTotal through HardwareCorrupted is read and skipped with continue until AnonHugePages matches]
00:04:31.662 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.662 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:31.662 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
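The AnonHugePages lookup that just returned 0 is the same get_meminfo helper traced at setup/common.sh@17-33 throughout this log: pick /proc/meminfo or the per-node file, strip the "Node N " prefix, then scan field by field. The snippet below is a hedged re-creation assembled from the traced commands only, so details may differ from the function in the SPDK repo.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used when stripping the prefix

    # Re-creation of the traced lookup: print the value of one meminfo field,
    # either system-wide or for a single NUMA node.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo AnonHugePages      # reads /proc/meminfo, as in the record above
    get_meminfo HugePages_Surp 0   # reads node0's meminfo, as in even_2G_alloc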
00:04:31.662 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-31 xtrace, condensed: get=HugePages_Surp, node is empty, mem_f stays /proc/meminfo, mapfile -t mem re-reads it before the field scan]
00:04:31.662 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4797388 kB' 'MemAvailable: 9484808 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031060 kB' 'Inactive: 3918348 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142172 kB' 'Active(file): 1030016 kB' 'Inactive(file): 3776176 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 64 kB' 'Writeback: 0 kB' 'AnonPages: 160828 kB' 'Mapped: 67192 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 267896 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65364 kB' 'KernelStack: 4368 kB' 'PageTables: 3336 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 507400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32 xtrace, condensed: every field from MemTotal through HugePages_Rsvd is read and skipped with continue until HugePages_Surp matches]
00:04:31.663 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.663 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:31.663 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
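With anon and surp now known (and resv about to be read), verify_nr_hugepages does the arithmetic traced earlier at setup/hugepages.sh@110-130: the kernel-reported HugePages_Total has to equal the requested count plus surplus and reserved pages, and the per-node expectations get the surplus/reserved pages folded in before the "nodeN=... expecting ..." line is printed. The sketch below condenses that bookkeeping; check_hugepages and its simplified single-node flow are illustrative only, not the script's exact code.

    #!/usr/bin/env bash
    # Sketch of the verification traced at setup/hugepages.sh@110-130,
    # reduced to one node: compare the system-wide total against the
    # requested count plus surplus and reserved pages, then print the
    # per-node expectation the log shows (e.g. "node0=1024 expecting 1024").
    check_hugepages() {
        local nr_hugepages=$1 total=$2 surp=$3 resv=$4
        local -a nodes_test=([0]=$nr_hugepages)   # what the test asked node 0 to hold
        local -a nodes_sys=([0]=$total)           # what the kernel reports for node 0
        local node

        (( total == nr_hugepages + surp + resv )) || return 1
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))        # reserved pages still belong to the node
            (( nodes_test[node] += surp ))        # so do surplus pages
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        done
    }

    check_hugepages 1025 1025 0 0   # mirrors the odd_alloc values gathered above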
'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 72 kB' 'Writeback: 0 kB' 'AnonPages: 160308 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 267984 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65452 kB' 'KernelStack: 4340 kB' 'PageTables: 3172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 507400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:31.665 nr_hugepages=1025 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.665 resv_hugepages=0 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.665 surplus_hugepages=0 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.665 anon_hugepages=0 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4797164 kB' 'MemAvailable: 9484588 kB' 'Buffers: 36284 kB' 'Cached: 4781572 kB' 'SwapCached: 0 kB' 'Active: 1031060 kB' 'Inactive: 3917888 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141712 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776176 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 72 kB' 'Writeback: 0 kB' 'AnonPages: 160308 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 267984 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 
65452 kB' 'KernelStack: 4420 kB' 'PageTables: 3356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 507400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.666 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.925 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:31.926 11:28:03 
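[editor note] The scan that just returned 1025 above is setup/common.sh's get_meminfo helper walking /proc/meminfo (or, when a node index is passed, /sys/devices/system/node/node<N>/meminfo with the "Node <N> " prefix stripped by the mapfile/"${mem[@]#Node +([0-9]) }" step), splitting each line on ': ' and echoing the value once the requested key matches. A minimal, self-contained sketch of that pattern follows; the function name get_meminfo_sketch and the simplified error handling are illustrative, not the exact SPDK helper.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced above (illustrative only;
# the real helper also handles per-node meminfo files under sysfs).
get_meminfo_sketch() {
	local get=$1                           # field to look up, e.g. HugePages_Total
	local var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # every non-matching key just falls through
		echo "$val"                        # numeric value; a trailing "kB" unit lands in $_
		return 0
	done </proc/meminfo
	return 1                               # requested key not present
}

# Usage, mirroring the values seen in this run:
#   get_meminfo_sketch HugePages_Total   -> 1025
#   get_meminfo_sketch HugePages_Surp    -> 0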
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4797164 kB' 'MemUsed: 7445808 kB' 'SwapCached: 0 kB' 'Active: 1031060 kB' 'Inactive: 3917800 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141624 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776176 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 72 kB' 'Writeback: 0 kB' 'FilePages: 4817856 kB' 'Mapped: 67136 kB' 'AnonPages: 160224 kB' 'Shmem: 2596 kB' 'KernelStack: 4356 kB' 'PageTables: 3196 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 202532 kB' 'Slab: 267984 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.926 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:31.927 node0=1025 expecting 1025 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:31.927 00:04:31.927 real 0m1.112s 00:04:31.927 user 0m0.369s 00:04:31.927 sys 0m0.715s 00:04:31.927 11:28:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:31.927 
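[editor note] What the odd_alloc verification above boils down to: the deliberately odd request (1025 pages) must match both the global hugepage counters and the per-node total, with no surplus or reserved pages outstanding, which is why the trace ends with "node0=1025 expecting 1025". A helper-free restatement of that check is sketched below; it uses the standard /proc and sysfs meminfo fields and is illustrative, not the hugepages.sh code itself.

# Illustrative restatement of the odd_alloc check traced above (not the hugepages.sh code).
expected=1025
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
(( expected == total + surp + resv )) || exit 1     # global accounting must balance
# Per-node accounting: on this single-node VM the whole allocation sits on node 0.
node0=$(awk '/HugePages_Total:/ {print $NF}' /sys/devices/system/node/node0/meminfo)
echo "node0=$node0 expecting $expected"
[[ $node0 == "$expected" ]]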
11:28:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:31.927 ************************************ 00:04:31.927 END TEST odd_alloc 00:04:31.927 ************************************ 00:04:31.927 11:28:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:31.927 11:28:03 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:31.927 11:28:03 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:31.927 11:28:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.927 ************************************ 00:04:31.927 START TEST custom_alloc 00:04:31.927 ************************************ 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:31.927 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- 
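[editor note] The custom_alloc test starting above asks get_test_nr_hugepages for 1048576 kB (1 GiB) and, with this host's 2048 kB default hugepage size, the request becomes 512 pages, all parked on the only NUMA node. The arithmetic implied by the trace is restated below; variable names are illustrative and the real helper also supports per-node user overrides.

# Size-to-count conversion behind "nr_hugepages=512" above (illustrative sketch).
size_kb=1048576                                                     # requested: 1 GiB in kB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this host
if (( size_kb >= hugepagesize_kb )); then
	nr_hugepages=$(( size_kb / hugepagesize_kb ))               # 1048576 / 2048 = 512
fi
echo "nr_hugepages=$nr_hugepages"
nodes_test[0]=$nr_hugepages                                         # single node gets the full count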
setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.928 11:28:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:32.186 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.445 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.709 11:28:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5850196 kB' 'MemAvailable: 10537616 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031064 kB' 'Inactive: 3918316 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142144 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776172 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'AnonPages: 160820 kB' 'Mapped: 67140 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268164 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65632 kB' 'KernelStack: 4364 kB' 'PageTables: 3552 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.709 11:28:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@18 -- # local node= 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.710 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5850456 kB' 'MemAvailable: 10537876 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031064 kB' 'Inactive: 3917972 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141800 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776172 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'AnonPages: 160476 kB' 'Mapped: 67100 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268164 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65632 kB' 'KernelStack: 4332 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 508448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 
11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 
11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:32.711 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@18 -- # local node= 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5850696 kB' 'MemAvailable: 10538116 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031060 kB' 'Inactive: 3917956 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141784 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776172 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 80 kB' 'Writeback: 0 kB' 'AnonPages: 160520 kB' 'Mapped: 67396 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268100 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65568 kB' 'KernelStack: 4336 kB' 'PageTables: 3340 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.712 
11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.712 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 
11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.713 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.714 nr_hugepages=512 00:04:32.714 resv_hugepages=0 00:04:32.714 surplus_hugepages=0 00:04:32.714 anon_hugepages=0 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:32.714 
11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5850940 kB' 'MemAvailable: 10538360 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031056 kB' 'Inactive: 3917752 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 141580 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776172 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 80 kB' 'Writeback: 0 kB' 'AnonPages: 160252 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268076 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65544 kB' 'KernelStack: 4304 kB' 'PageTables: 3256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.714 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 
11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.715 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.716 
11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5850940 kB' 'MemUsed: 6392032 kB' 'SwapCached: 0 kB' 'Active: 1031056 kB' 'Inactive: 3917680 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 141508 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776172 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 80 kB' 'Writeback: 0 kB' 'FilePages: 4817852 kB' 'Mapped: 67136 kB' 'AnonPages: 160168 kB' 'Shmem: 2596 kB' 'KernelStack: 4288 kB' 'PageTables: 3212 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 202532 kB' 'Slab: 268076 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.716 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.717 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.989 11:28:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:32.989 node0=512 expecting 512 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:32.989 00:04:32.989 real 0m0.934s 00:04:32.989 user 0m0.299s 00:04:32.989 sys 0m0.578s 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:32.989 11:28:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:32.989 ************************************ 00:04:32.989 END TEST custom_alloc 00:04:32.989 ************************************ 00:04:32.989 11:28:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:32.989 11:28:04 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:32.989 11:28:04 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:32.989 11:28:04 setup.sh.hugepages -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.989 ************************************ 00:04:32.989 START TEST no_shrink_alloc 00:04:32.989 ************************************ 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.990 11:28:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.248 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:33.248 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 
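The setup/hugepages.sh entries just above (get_test_nr_hugepages 2097152 0 through the return at @73) show the no_shrink_alloc test translating a 2097152 kB request into nr_hugepages=1024 and pinning that count to node 0 before re-running scripts/setup.sh. A minimal sketch of that arithmetic follows, assuming the 2048 kB default page size reported as Hugepagesize in the meminfo dumps; the function and variable names, and the single-node assignment, are illustrative only and are not the actual setup/hugepages.sh code:

#!/usr/bin/env bash
# Illustrative sketch only: convert a size request (kB) into a hugepage count
# and record it for each requested NUMA node, mirroring the traced behaviour.
default_hugepages_kb=2048                     # 'Hugepagesize: 2048 kB' in the dumps above

get_test_nr_hugepages() {
    local size_kb=$1; shift
    local node_ids=("$@")                     # e.g. ('0') in this run
    local nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 2097152 / 2048 = 1024

    declare -gA nodes_test=()
    local node
    for node in "${node_ids[@]}"; do
        # With a single node, as in this run, the whole count lands on node 0;
        # multi-node splitting is not modelled here.
        nodes_test[$node]=$nr_hugepages
    done
    echo "nr_hugepages=$nr_hugepages nodes=${node_ids[*]}"
}

get_test_nr_hugepages 2097152 0               # prints: nr_hugepages=1024 nodes=0

With node 0 as the only requested node, the verify_nr_hugepages pass that follows expects HugePages_Total to read 1024, just as the custom_alloc test above expected 512.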
00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.819 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4798268 kB' 'MemAvailable: 9485688 kB' 'Buffers: 36284 kB' 'Cached: 4781568 kB' 'SwapCached: 0 kB' 'Active: 1031072 kB' 'Inactive: 3918312 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142136 kB' 'Active(file): 1030016 kB' 'Inactive(file): 3776176 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 84 kB' 'Writeback: 0 kB' 'AnonPages: 160808 kB' 'Mapped: 67236 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268272 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65740 kB' 'KernelStack: 4460 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4798752 kB' 'MemAvailable: 9486180 kB' 'Buffers: 36284 kB' 'Cached: 4781576 kB' 'SwapCached: 0 kB' 'Active: 1031056 kB' 'Inactive: 3917580 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 141400 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776180 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 160264 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268192 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65660 kB' 'KernelStack: 4288 kB' 'PageTables: 3208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
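[editor's note] The trace above is setup/hugepages.sh asking setup/common.sh's get_meminfo for AnonHugePages: common.sh walks /proc/meminfo under IFS=': ', hits "continue" on every non-matching key, and echoes the value (0 here) when the requested key matches, which hugepages.sh captures as anon=0 at @97 before repeating the same walk for HugePages_Surp. The following is a condensed, illustrative sketch of that lookup pattern, reconstructed only from the common.sh@17-@33 lines visible in this log, not from the full script:

# Illustrative stand-in for setup/common.sh:get_meminfo, based on the trace above;
# the real helper can also read /sys/devices/system/node/node<N>/meminfo and first
# strips the leading "Node <N> " prefix (common.sh@23-@29), which is omitted here.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do    # "AnonHugePages:   0 kB" -> var=AnonHugePages, val=0
        [[ $var == "$get" ]] || continue    # the long run of "continue" entries in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

anon=$(get_meminfo_sketch AnonHugePages)    # -> 0 in this run, as captured at hugepages.sh@97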
00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
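[editor's note] The full snapshot printed at common.sh@16 just above is internally consistent for an untouched 2 MiB pool; a quick check of the figures it reports:

# From the snapshot: HugePages_Total: 1024, HugePages_Free: 1024,
# Hugepagesize: 2048 kB, Hugetlb: 2097152 kB.
echo $(( 1024 * 2048 ))   # 2097152 (kB) -- matches the Hugetlb line,
                          # and Free == Total, so no pages are in use yet.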
00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4798752 kB' 'MemAvailable: 9486180 kB' 'Buffers: 36284 kB' 'Cached: 4781576 kB' 'SwapCached: 0 kB' 'Active: 1031056 kB' 'Inactive: 3917500 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 141320 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776180 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 160196 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268096 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65564 kB' 'KernelStack: 4304 kB' 'PageTables: 3248 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 
11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.824 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.825 nr_hugepages=1024 00:04:33.825 resv_hugepages=0 00:04:33.825 surplus_hugepages=0 00:04:33.825 anon_hugepages=0 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.825 11:28:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4799004 kB' 'MemAvailable: 9486432 kB' 'Buffers: 36284 kB' 'Cached: 4781576 kB' 'SwapCached: 0 kB' 'Active: 1031056 kB' 'Inactive: 3917508 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 141328 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776180 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 159944 kB' 'Mapped: 67136 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268096 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65564 kB' 'KernelStack: 4288 kB' 'PageTables: 3208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.825 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.825 11:28:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[xtrace condensed: 00:04:33.825-00:04:33.827 11:28:05, setup/common.sh@31-@32 repeated IFS=': ', read -r var val _ and continue for each non-matching /proc/meminfo key: SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped]
00:04:33.827 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.827 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:33.827 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
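For reference, the get_meminfo lookup that the condensed trace above keeps exercising is a small parsing loop. The sketch below is a minimal reimplementation based only on what the trace shows; it is illustrative, not the verbatim setup/common.sh source, and is written so it can run standalone.

#!/usr/bin/env bash
shopt -s extglob
# Minimal sketch of the meminfo lookup pattern seen in the trace: pick
# /proc/meminfo or the per-node file, strip the "Node N " prefix that the
# per-node files carry, then scan "key: value" pairs until the key matches.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo mem
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# Usage matching the calls in the trace:
#   get_meminfo HugePages_Total     -> 1024 (system-wide)
#   get_meminfo HugePages_Surp 0    -> 0    (node 0)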
00:04:33.827 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.827 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:33.827 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:33.827 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.827 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4800012 kB' 'MemUsed: 7442960 kB' 'SwapCached: 0 kB' 'Active: 1031056 kB' 'Inactive: 3917468 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 141288 kB' 'Active(file): 1030020 kB' 'Inactive(file): 3776180 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'FilePages: 4817860 kB' 'Mapped: 67136 kB' 'AnonPages: 160164 kB' 'Shmem: 2596 kB' 'KernelStack: 4324 kB' 'PageTables: 3128 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 202532 kB' 'Slab: 268096 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:34.087 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: 00:04:34.087-00:04:34.089 11:28:05, setup/common.sh@31-@32 repeated the read/continue loop for every key of the node0 snapshot above, in order, from MemTotal through HugePages_Free, until HugePages_Surp matched]
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.089 11:28:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:34.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:34.349 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.349 INFO: Requested 512 hugepages but 1024 already allocated on node0
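The "node0=1024 expecting 1024" bookkeeping above (get_nodes filling nodes_sys, then the per-node totals being compared) amounts to reading HugePages_Total from each NUMA node's meminfo and checking it against the requested count. A small standalone illustration of that check follows; the sysfs paths are the standard kernel ones, and the script itself is mine rather than SPDK's.

#!/usr/bin/env bash
# Print "nodeN=<pages> expecting <expected>" for every NUMA node, the way
# the hugepages.sh trace above does for node 0.
expected=${1:-1024}   # the trace above expects 1024 pages on node 0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    pages=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${pages} expecting ${expected}"
done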
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4799816 kB' 'MemAvailable: 9487240 kB' 'Buffers: 36284 kB' 'Cached: 4781572 kB' 'SwapCached: 0 kB' 'Active: 1031096 kB' 'Inactive: 3918240 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 142076 kB' 'Active(file): 1030032 kB' 'Inactive(file): 3776164 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 92 kB' 'Writeback: 0 kB' 'AnonPages: 160792 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268188 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65656 kB' 'KernelStack: 4348 kB' 'PageTables: 3792 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB'
00:04:34.349 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: 00:04:34.349-00:04:34.351 11:28:06, setup/common.sh@31-@32 repeated the read/continue loop for every key of the snapshot above, in order, from MemTotal through HardwareCorrupted, until AnonHugePages matched]
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4799816 kB' 'MemAvailable: 9487240 kB' 'Buffers: 36284 kB' 'Cached: 4781572 kB' 'SwapCached: 0 kB' 'Active: 1031096 kB' 'Inactive: 3918184 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 142020 kB' 'Active(file): 1030032 kB' 'Inactive(file): 3776164 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 160956 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268196 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65664 kB' 'KernelStack: 4364 kB' 'PageTables: 3812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB'
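Before the surplus lookup above, verify_nr_hugepages checked /sys/kernel/mm/transparent_hugepage/enabled (the "always [madvise] never" test) and only then read AnonHugePages, presumably so anonymous THP usage can be accounted for alongside the reserved hugepages. A minimal standalone sketch of that guard, with variable names of my own choosing:

#!/usr/bin/env bash
# Read AnonHugePages only when transparent hugepages are not fully disabled,
# mirroring the setup/hugepages.sh@96-@97 records in the trace above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in the trace above
fi
echo "AnonHugePages: ${anon} kB"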
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.351 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: 00:04:34.351-00:04:34.352 11:28:06, setup/common.sh@31-@32 repeated the read/continue loop for each key of the snapshot above, in order, from MemFree through ShmemPmdMapped, none matching HugePages_Surp]
00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4800596 kB' 'MemAvailable: 9488020 kB' 'Buffers: 36284 kB' 'Cached: 4781572 kB' 'SwapCached: 0 kB' 'Active: 1031084 kB' 'Inactive: 3917728 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141564 kB' 'Active(file): 1030032 kB' 
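The pattern above is the whole of the lookup: get_meminfo snapshots the memory counters into an array (the printf '%s\n' 'MemTotal: ...' lines in the trace), then walks the entries with IFS=': ' and read -r var val _, hitting continue for every key that is not the one requested and echoing the value once it matches; the backslash-escaped right-hand side in the [[ ]] lines appears to be just how xtrace renders the quoted "$get" operand. A minimal sketch of that lookup with hypothetical names (the real helper is get_meminfo in setup/common.sh and also handles per-node files):

    # Sketch only: mirrors the IFS/read/compare/continue cycle visible in the trace above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # "HugePages_Surp:   0" -> var=HugePages_Surp val=0
            [[ $var == "$get" ]] || continue   # skip every field that is not the requested one
            echo "$val"
            return 0
        done < /proc/meminfo
    }
    get_meminfo_value HugePages_Surp   # prints 0 here, which is why hugepages.sh@99 sets surp=0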
00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:34.353 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.614 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4800596 kB' 'MemAvailable: 9488020 kB' 'Buffers: 36284 kB' 'Cached: 4781572 kB' 'SwapCached: 0 kB' 'Active: 1031084 kB' 'Inactive: 3917728 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141564 kB' 'Active(file): 1030032 kB' 'Inactive(file): 3776164 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 160208 kB' 'Mapped: 67316 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268276 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65744 kB' 'KernelStack: 4296 kB' 'PageTables: 3564 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB'
(the setup/common.sh@31-32 cycle of IFS=': ', read -r var val _, [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and continue repeats for every field from MemTotal through HugePages_Free; none match)
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:34.616 nr_hugepages=1024
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:34.616 resv_hugepages=0
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:34.616 surplus_hugepages=0
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:34.616 anon_hugepages=0
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
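With surp and resv both 0, the numbers echoed above feed the consistency checks at setup/hugepages.sh@107 and @109: the 1024 pages the no_shrink_alloc test configured must account for the whole pool, with nothing surplus or reserved, so the arithmetic is simply 1024 == 1024 + 0 + 0. Restated as a sketch, with variable names assumed to mirror the log:

    # Sketch of the hugepages.sh@107/@109 checks, using the values printed above.
    nr_hugepages=1024; surp=0; resv=0
    (( 1024 == nr_hugepages + surp + resv )) &&   # requested pool == pool + surplus + reserved
    (( 1024 == nr_hugepages )) &&                 # and the request itself was not adjusted
    echo 'hugepage accounting consistent'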
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:34.616 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4800596 kB' 'MemAvailable: 9488020 kB' 'Buffers: 36284 kB' 'Cached: 4781572 kB' 'SwapCached: 0 kB' 'Active: 1031072 kB' 'Inactive: 3917656 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141492 kB' 'Active(file): 1030032 kB' 'Inactive(file): 3776164 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 160108 kB' 'Mapped: 67320 kB' 'Shmem: 2596 kB' 'KReclaimable: 202532 kB' 'Slab: 268276 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65744 kB' 'KernelStack: 4332 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 507528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 143212 kB' 'DirectMap2M: 4050944 kB' 'DirectMap1G: 10485760 kB'
(the setup/common.sh@31-32 cycle of IFS=': ', read -r var val _, [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and continue repeats for every field from MemTotal through FilePmdMapped; none match)
00:04:34.617 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:34.617 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:34.617 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.617 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:34.617 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
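The per-node pass that starts below reuses the same helper: get_nodes globs /sys/devices/system/node/node+([0-9]) (an extglob pattern; only node0 exists, hence no_nodes=1), and get_meminfo is then called with an explicit node index, which swaps mem_f over to the per-node meminfo file and strips the leading "Node <n> " prefix so the key/value parser stays unchanged. A rough sketch of that branch, assuming extglob is enabled as the +([0-9]) patterns in the trace imply (the trailing grep is only for illustration):

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo   # prefer the per-node counters
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp)'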
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4800596 kB' 'MemUsed: 7442376 kB' 'SwapCached: 0 kB' 'Active: 1031072 kB' 'Inactive: 3917616 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141452 kB' 'Active(file): 1030032 kB' 'Inactive(file): 3776164 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'FilePages: 4817856 kB' 'Mapped: 67320 kB' 'AnonPages: 160316 kB' 'Shmem: 2596 kB' 'KernelStack: 4368 kB' 'PageTables: 3416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 202532 kB' 'Slab: 268276 kB' 'SReclaimable: 202532 kB' 'SUnreclaim: 65744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:34.618 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
(the setup/common.sh@31-32 cycle of IFS=': ', read -r var val _, [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and continue repeats over the node0 fields, MemTotal through ShmemHugePages, with no match)
00:04:34.619
11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:34.619 node0=1024 expecting 1024 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:34.619 00:04:34.619 real 0m1.707s 00:04:34.619 user 0m0.616s 00:04:34.619 sys 0m1.022s 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.619 11:28:06 setup.sh.hugepages.no_shrink_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:04:34.619 ************************************ 00:04:34.619 END TEST no_shrink_alloc 00:04:34.619 ************************************ 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:34.619 11:28:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:34.619 ************************************ 00:04:34.619 END TEST hugepages 00:04:34.619 ************************************ 00:04:34.619 00:04:34.619 real 0m7.336s 00:04:34.619 user 0m2.540s 00:04:34.619 sys 0m4.758s 00:04:34.619 11:28:06 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.619 11:28:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.619 11:28:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:34.619 11:28:06 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:34.619 11:28:06 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:34.619 11:28:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.619 ************************************ 00:04:34.619 START TEST driver 00:04:34.619 ************************************ 00:04:34.619 11:28:06 setup.sh.driver -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:34.878 * Looking for test storage... 
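The HugePages_Surp lookup traced above is the generic get_meminfo helper at work: given a node argument it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, strips the leading "Node 0" prefix from every line, and walks the fields until it reaches the requested key, echoing its value (0 surplus pages here, against 1024 total and 1024 free). A minimal stand-alone sketch of the same idea follows; the function name get_node_meminfo and the example call are illustrative, not part of the SPDK scripts.

shopt -s extglob
# get_node_meminfo FIELD [NODE] - print one meminfo field, per node when a node is given
get_node_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    local line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # node-scoped files prefix every line with "Node 0 "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_node_meminfo HugePages_Surp 0    # prints 0 for the node shown in this log

The clear_hp step that closes the suite is the matching teardown: it effectively writes 0 to nr_hugepages for every hugepage size under each node, which is what the two echo 0 entries after "END TEST no_shrink_alloc" correspond to.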
00:04:34.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.878 11:28:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:34.878 11:28:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.878 11:28:06 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.529 11:28:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:35.529 11:28:07 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:35.529 11:28:07 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:35.529 11:28:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:35.529 ************************************ 00:04:35.529 START TEST guess_driver 00:04:35.529 ************************************ 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:04:35.529 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:35.529 Looking for driver=uio_pci_generic 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:35.529 11:28:07 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.529 11:28:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.786 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:35.786 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:35.786 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.044 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.044 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:36.044 11:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.978 11:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:36.978 11:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:36.978 11:28:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.978 11:28:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.584 00:04:37.584 real 0m2.096s 00:04:37.584 user 0m0.463s 00:04:37.584 sys 0m1.635s 00:04:37.584 11:28:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:37.584 ************************************ 00:04:37.584 END TEST guess_driver 00:04:37.584 11:28:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.584 ************************************ 00:04:37.584 00:04:37.584 real 0m2.820s 00:04:37.584 user 0m0.801s 00:04:37.584 sys 0m2.056s 00:04:37.584 11:28:09 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:37.584 11:28:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.584 ************************************ 00:04:37.584 END TEST driver 00:04:37.584 ************************************ 00:04:37.584 11:28:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:37.584 11:28:09 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:37.584 11:28:09 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:37.584 11:28:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.584 ************************************ 00:04:37.584 START TEST devices 00:04:37.584 ************************************ 00:04:37.584 11:28:09 setup.sh.devices -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:37.584 * Looking for test storage... 
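The trace above also contains the whole of guess_driver's decision: vfio is only worth picking when /sys/kernel/iommu_groups is populated (or the vfio module advertises unsafe no-IOMMU mode), and since this VM has neither, the test falls back to uio_pci_generic, accepting it because modprobe --show-depends resolves it to real .ko files. Reduced to stand-alone shell (pick_pci_driver is an illustrative name, not the SPDK function), the selection looks roughly like this:

shopt -s nullglob    # an empty iommu_groups directory must give an empty array
pick_pci_driver() {
    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # fall back to uio_pci_generic only if the module resolves to a loadable .ko
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}
pick_pci_driver    # prints uio_pci_generic on the VM in this log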
00:04:37.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.584 11:28:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:37.584 11:28:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:37.584 11:28:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.584 11:28:09 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:38.157 11:28:10 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:38.157 11:28:10 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:38.157 11:28:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:38.157 11:28:10 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:38.157 11:28:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:38.157 11:28:10 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:38.157 11:28:10 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.157 11:28:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:38.157 11:28:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:38.157 11:28:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:38.157 11:28:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:38.157 No valid GPT data, bailing 00:04:38.416 11:28:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.416 11:28:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:38.416 11:28:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:38.417 11:28:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:38.417 11:28:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:38.417 11:28:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:38.417 11:28:10 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:38.417 11:28:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:38.417 11:28:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:38.417 11:28:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:38.417 11:28:10 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:38.417 11:28:10 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:38.417 11:28:10 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:38.417 11:28:10 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:38.417 11:28:10 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:38.417 11:28:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.417 ************************************ 00:04:38.417 START TEST nvme_mount 00:04:38.417 ************************************ 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.417 11:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:39.353 Creating new GPT entries in memory. 00:04:39.353 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.353 other utilities. 00:04:39.353 11:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.353 11:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.353 11:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.353 11:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.353 11:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:40.343 Creating new GPT entries in memory. 
00:04:40.343 The operation has completed successfully. 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 104508 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.343 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:40.344 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.344 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.344 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.344 11:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.602 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:40.602 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:40.602 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.602 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.602 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:40.602 11:28:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.861 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:40.861 11:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:41.795 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.795 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:41.795 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:41.795 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:41.795 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.795 11:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.054 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:42.054 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:42.054 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.054 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.054 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:42.054 11:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.054 11:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:42.054 11:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.449 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.449 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:43.449 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.449 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.449 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:43.449 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 
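Stripped of the xtrace noise, the nvme_mount test running above is a plain partition/format/mount round trip: zap the GPT on the test disk, create one partition (sectors 2048-264191, per the sgdisk call in the log), wait for the new block device to appear, put ext4 on it, mount it under test/setup/nvme_mount, and leave a test_nvme file behind for verify() to find; the second half of the test wipes the partition and repeats the cycle against the whole disk. The commands below are a hedged reconstruction of that cycle, not a copy of devices.sh: udevadm settle stands in for sync_dev_uevents.sh and touch for the test-file creation.

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:264191        # the single partition seen in the log
udevadm settle                            # wait for /dev/nvme0n1p1 to show up
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                    # marker file that verify() checks for

# teardown, as cleanup_nvme does further down
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"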
00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.750 11:28:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:44.317 11:28:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:44.317 11:28:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:44.317 11:28:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.317 11:28:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.317 11:28:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:44.317 11:28:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.317 11:28:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:44.317 11:28:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:45.249 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.249 00:04:45.249 real 0m6.932s 00:04:45.249 user 0m0.760s 00:04:45.249 sys 0m3.541s 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:45.249 ************************************ 00:04:45.249 END TEST nvme_mount 00:04:45.249 ************************************ 00:04:45.249 11:28:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:45.249 11:28:17 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:45.249 11:28:17 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:45.249 11:28:17 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:45.249 11:28:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:45.249 ************************************ 00:04:45.249 START TEST dm_mount 00:04:45.249 ************************************ 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- 
common/autotest_common.sh@1124 -- # dm_mount 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:45.249 11:28:17 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:46.625 Creating new GPT entries in memory. 00:04:46.625 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:46.625 other utilities. 00:04:46.625 11:28:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:46.625 11:28:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.625 11:28:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:46.625 11:28:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:46.625 11:28:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:47.560 Creating new GPT entries in memory. 00:04:47.560 The operation has completed successfully. 00:04:47.560 11:28:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:47.560 11:28:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.560 11:28:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:47.560 11:28:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.560 11:28:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:48.495 The operation has completed successfully. 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 105002 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.495 11:28:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.775 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:48.775 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:48.775 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:48.775 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.775 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:48.775 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.033 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:49.033 11:28:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.966 
11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.966 11:28:21 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.235 11:28:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:50.235 11:28:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:50.235 11:28:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.235 11:28:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.235 11:28:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:50.235 11:28:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.235 11:28:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:50.235 11:28:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.173 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.173 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:51.173 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:51.173 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:51.173 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.173 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.173 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:51.432 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.432 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:51.432 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:51.432 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.432 11:28:23 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:51.432 00:04:51.432 real 0m6.017s 00:04:51.432 user 0m0.433s 00:04:51.432 sys 0m2.417s 00:04:51.432 11:28:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.432 11:28:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 ************************************ 00:04:51.432 END TEST dm_mount 00:04:51.432 ************************************ 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 
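The dm_mount test that finishes above follows the same pattern one layer up: two partitions are cut from the disk, a device-mapper target named nvme_dm_test is created on top of them, /dev/mapper/nvme_dm_test is resolved to its dm-0 node, both partitions are checked for dm-0 under their holders/ directories, and the ext4/mount/test-file cycle runs against the mapped device before cleanup_dm removes it. Below is a sketch of building and tearing down such a mapping; the log does not show the dmsetup table dm_mount actually loads, so a linear concatenation of the two partitions is assumed here.

p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")    # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")    # dm-0 in this run
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]             # both partitions
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]             # now hold dm-0
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount

# teardown, as cleanup_dm does above
umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
dmsetup remove --force nvme_dm_test
wipefs --all "$p1"
wipefs --all "$p2"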
00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.432 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:51.432 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:51.432 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:51.432 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.432 11:28:23 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:51.432 00:04:51.432 real 0m13.874s 00:04:51.432 user 0m1.645s 00:04:51.432 sys 0m6.436s 00:04:51.432 11:28:23 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.432 11:28:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 ************************************ 00:04:51.432 END TEST devices 00:04:51.432 ************************************ 00:04:51.432 00:04:51.432 real 0m29.839s 00:04:51.432 user 0m6.787s 00:04:51.432 sys 0m17.422s 00:04:51.432 11:28:23 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.432 11:28:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 ************************************ 00:04:51.432 END TEST setup.sh 00:04:51.432 ************************************ 00:04:51.432 11:28:23 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:51.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:51.998 Hugepages 00:04:51.998 node hugesize free / total 00:04:51.998 node0 1048576kB 0 / 0 00:04:51.998 node0 2048kB 2048 / 2048 00:04:51.998 00:04:51.998 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.998 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:52.256 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:52.256 11:28:24 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.256 11:28:24 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.256 11:28:24 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.256 11:28:24 -- common/autotest_common.sh@1530 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:52.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.758 11:28:25 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:54.694 11:28:26 -- common/autotest_common.sh@1532 -- # bdfs=() 00:04:54.694 11:28:26 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:54.694 11:28:26 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:54.694 11:28:26 
-- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:54.694 11:28:26 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:54.694 11:28:26 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:54.694 11:28:26 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.694 11:28:26 -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.694 11:28:26 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:54.694 11:28:26 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:54.694 11:28:26 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:04:54.694 11:28:26 -- common/autotest_common.sh@1535 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:55.262 Waiting for block devices as requested 00:04:55.262 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.262 11:28:27 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:04:55.262 11:28:27 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:55.262 11:28:27 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.262 11:28:27 -- common/autotest_common.sh@1501 -- # grep 0000:00:10.0/nvme/nvme 00:04:55.262 11:28:27 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:04:55.262 11:28:27 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:04:55.262 11:28:27 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:04:55.262 11:28:27 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:55.262 11:28:27 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:55.262 11:28:27 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:55.262 11:28:27 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:55.262 11:28:27 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:55.262 11:28:27 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:55.262 11:28:27 -- common/autotest_common.sh@1544 -- # oacs=' 0x12a' 00:04:55.262 11:28:27 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:55.262 11:28:27 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:55.262 11:28:27 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:55.262 11:28:27 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:55.262 11:28:27 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:55.262 11:28:27 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:55.262 11:28:27 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:55.262 11:28:27 -- common/autotest_common.sh@1556 -- # continue 00:04:55.262 11:28:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:55.262 11:28:27 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:55.262 11:28:27 -- common/autotest_common.sh@10 -- # set +x 00:04:55.520 11:28:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:55.520 11:28:27 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:55.520 11:28:27 -- common/autotest_common.sh@10 -- # set +x 00:04:55.520 11:28:27 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 
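Note: the namespace-revert step above reduces to a short shell check: resolve the PCI address to its controller node through sysfs, read OACS from nvme id-ctrl, and test bit 3 (Namespace Management). A minimal sketch of that check, assuming nvme-cli is installed and the single controller at 0000:00:10.0 seen in this run:

# Sketch only -- mirrors the check in the log above.
bdf=0000:00:10.0
ctrlr_path=$(readlink -f /sys/class/nvme/nvme0 | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$ctrlr_path")                      # -> /dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0x12a' in this run
if (( oacs & 0x8 )); then                                 # bit 3 = Namespace Management
    echo "$ctrlr supports namespace management"           # 0x12a & 0x8 = 8, so true here
fi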
00:04:56.069 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.006 11:28:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:57.006 11:28:28 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:57.006 11:28:28 -- common/autotest_common.sh@10 -- # set +x 00:04:57.006 11:28:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:57.006 11:28:28 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:57.006 11:28:28 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.006 11:28:28 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:57.006 11:28:28 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:57.006 11:28:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:57.006 11:28:28 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:57.006 11:28:28 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:57.006 11:28:28 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.006 11:28:28 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:57.006 11:28:28 -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.006 11:28:28 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:57.006 11:28:28 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:04:57.006 11:28:28 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:57.006 11:28:28 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:57.006 11:28:28 -- common/autotest_common.sh@1579 -- # device=0x0010 00:04:57.006 11:28:28 -- common/autotest_common.sh@1580 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.006 11:28:28 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:04:57.006 11:28:28 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:04:57.006 11:28:28 -- common/autotest_common.sh@1592 -- # return 0 00:04:57.006 11:28:28 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:04:57.006 11:28:28 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.006 11:28:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:57.006 11:28:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:57.006 11:28:28 -- common/autotest_common.sh@10 -- # set +x 00:04:57.006 ************************************ 00:04:57.006 START TEST unittest 00:04:57.006 ************************************ 00:04:57.006 11:28:28 unittest -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.006 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.006 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.006 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.006 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.006 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
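Note: opal_revert_cleanup above filters the NVMe list by PCI device ID read from sysfs; the emulated QEMU controller reports 0x0010, so nothing matches 0x0a54 and the cleanup is skipped. A sketch of that filter, with the bdf list coming from gen_nvme.sh exactly as earlier in the log:

# Sketch of the device-ID filter performed above.
rootdir=/home/vagrant/spdk_repo/spdk
wanted=0x0a54                                             # ID opal_revert_cleanup looks for
for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")      # '0x0010' for this controller
    [[ $device == "$wanted" ]] && echo "$bdf"             # no match in this run
done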
00:04:57.006 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:57.006 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:57.006 ++ rpc_py=rpc_cmd 00:04:57.006 ++ set -e 00:04:57.006 ++ shopt -s nullglob 00:04:57.006 ++ shopt -s extglob 00:04:57.006 ++ shopt -s inherit_errexit 00:04:57.006 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:57.006 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:57.006 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:57.006 +++ CONFIG_WPDK_DIR= 00:04:57.006 +++ CONFIG_ASAN=y 00:04:57.006 +++ CONFIG_VBDEV_COMPRESS=n 00:04:57.007 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:57.007 +++ CONFIG_USDT=n 00:04:57.007 +++ CONFIG_CUSTOMOCF=n 00:04:57.007 +++ CONFIG_PREFIX=/usr/local 00:04:57.007 +++ CONFIG_RBD=n 00:04:57.007 +++ CONFIG_LIBDIR= 00:04:57.007 +++ CONFIG_IDXD=y 00:04:57.007 +++ CONFIG_NVME_CUSE=y 00:04:57.007 +++ CONFIG_SMA=n 00:04:57.007 +++ CONFIG_VTUNE=n 00:04:57.007 +++ CONFIG_TSAN=n 00:04:57.007 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:57.007 +++ CONFIG_VFIO_USER_DIR= 00:04:57.007 +++ CONFIG_PGO_CAPTURE=n 00:04:57.007 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:57.007 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:57.007 +++ CONFIG_LTO=n 00:04:57.007 +++ CONFIG_ISCSI_INITIATOR=y 00:04:57.007 +++ CONFIG_CET=n 00:04:57.007 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:57.007 +++ CONFIG_OCF_PATH= 00:04:57.007 +++ CONFIG_RDMA_SET_TOS=y 00:04:57.007 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:57.007 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:57.007 +++ CONFIG_UBLK=n 00:04:57.007 +++ CONFIG_ISAL_CRYPTO=y 00:04:57.007 +++ CONFIG_OPENSSL_PATH= 00:04:57.007 +++ CONFIG_OCF=n 00:04:57.007 +++ CONFIG_FUSE=n 00:04:57.007 +++ CONFIG_VTUNE_DIR= 00:04:57.007 +++ CONFIG_FUZZER_LIB= 00:04:57.007 +++ CONFIG_FUZZER=n 00:04:57.007 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:57.007 +++ CONFIG_CRYPTO=n 00:04:57.007 +++ CONFIG_PGO_USE=n 00:04:57.007 +++ CONFIG_VHOST=y 00:04:57.007 +++ CONFIG_DAOS=n 00:04:57.007 +++ CONFIG_DPDK_INC_DIR= 00:04:57.007 +++ CONFIG_DAOS_DIR= 00:04:57.007 +++ CONFIG_UNIT_TESTS=y 00:04:57.007 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:57.007 +++ CONFIG_VIRTIO=y 00:04:57.007 +++ CONFIG_DPDK_UADK=n 00:04:57.007 +++ CONFIG_COVERAGE=y 00:04:57.007 +++ CONFIG_RDMA=y 00:04:57.007 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:57.007 +++ CONFIG_URING_PATH= 00:04:57.007 +++ CONFIG_XNVME=n 00:04:57.007 +++ CONFIG_VFIO_USER=n 00:04:57.007 +++ CONFIG_ARCH=native 00:04:57.007 +++ CONFIG_HAVE_EVP_MAC=y 00:04:57.007 +++ CONFIG_URING_ZNS=n 00:04:57.007 +++ CONFIG_WERROR=y 00:04:57.007 +++ CONFIG_HAVE_LIBBSD=n 00:04:57.007 +++ CONFIG_UBSAN=y 00:04:57.007 +++ CONFIG_IPSEC_MB_DIR= 00:04:57.007 +++ CONFIG_GOLANG=n 00:04:57.007 +++ CONFIG_ISAL=y 00:04:57.007 +++ CONFIG_IDXD_KERNEL=n 00:04:57.007 +++ CONFIG_DPDK_LIB_DIR= 00:04:57.007 +++ CONFIG_RDMA_PROV=verbs 00:04:57.007 +++ CONFIG_APPS=y 00:04:57.007 +++ CONFIG_SHARED=n 00:04:57.007 +++ CONFIG_HAVE_KEYUTILS=y 00:04:57.007 +++ CONFIG_FC_PATH= 00:04:57.007 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:57.007 +++ CONFIG_FC=n 00:04:57.007 +++ CONFIG_AVAHI=n 00:04:57.007 +++ CONFIG_FIO_PLUGIN=y 00:04:57.007 +++ CONFIG_RAID5F=y 00:04:57.007 +++ CONFIG_EXAMPLES=y 00:04:57.007 +++ CONFIG_TESTS=y 00:04:57.007 +++ CONFIG_CRYPTO_MLX5=n 00:04:57.007 +++ CONFIG_MAX_LCORES= 00:04:57.007 +++ CONFIG_IPSEC_MB=n 00:04:57.007 +++ CONFIG_PGO_DIR= 00:04:57.007 +++ CONFIG_DEBUG=y 00:04:57.007 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:57.007 +++ CONFIG_CROSS_PREFIX= 00:04:57.007 +++ 
CONFIG_URING=n 00:04:57.007 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:57.007 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:57.007 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:04:57.007 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:57.007 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:57.007 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.007 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:57.007 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.007 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:57.007 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:57.007 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:57.007 +++ VHOST_APP=("$_app_dir/vhost") 00:04:57.007 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:57.007 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:57.007 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:57.007 +++ [[ #ifndef SPDK_CONFIG_H 00:04:57.007 #define SPDK_CONFIG_H 00:04:57.007 #define SPDK_CONFIG_APPS 1 00:04:57.007 #define SPDK_CONFIG_ARCH native 00:04:57.007 #define SPDK_CONFIG_ASAN 1 00:04:57.007 #undef SPDK_CONFIG_AVAHI 00:04:57.007 #undef SPDK_CONFIG_CET 00:04:57.007 #define SPDK_CONFIG_COVERAGE 1 00:04:57.007 #define SPDK_CONFIG_CROSS_PREFIX 00:04:57.007 #undef SPDK_CONFIG_CRYPTO 00:04:57.007 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:57.007 #undef SPDK_CONFIG_CUSTOMOCF 00:04:57.007 #undef SPDK_CONFIG_DAOS 00:04:57.007 #define SPDK_CONFIG_DAOS_DIR 00:04:57.007 #define SPDK_CONFIG_DEBUG 1 00:04:57.007 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:57.007 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:57.007 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:57.007 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:57.007 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:57.007 #undef SPDK_CONFIG_DPDK_UADK 00:04:57.007 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:57.007 #define SPDK_CONFIG_EXAMPLES 1 00:04:57.007 #undef SPDK_CONFIG_FC 00:04:57.007 #define SPDK_CONFIG_FC_PATH 00:04:57.007 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:57.007 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:57.007 #undef SPDK_CONFIG_FUSE 00:04:57.007 #undef SPDK_CONFIG_FUZZER 00:04:57.007 #define SPDK_CONFIG_FUZZER_LIB 00:04:57.007 #undef SPDK_CONFIG_GOLANG 00:04:57.007 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:57.007 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:04:57.007 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:57.007 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:04:57.007 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:57.007 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:57.007 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:04:57.007 #define SPDK_CONFIG_IDXD 1 00:04:57.007 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:57.007 #undef SPDK_CONFIG_IPSEC_MB 00:04:57.007 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:57.007 #define SPDK_CONFIG_ISAL 1 00:04:57.007 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:04:57.007 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:04:57.007 #define SPDK_CONFIG_LIBDIR 00:04:57.007 #undef SPDK_CONFIG_LTO 00:04:57.007 #define SPDK_CONFIG_MAX_LCORES 00:04:57.007 #define SPDK_CONFIG_NVME_CUSE 1 00:04:57.007 #undef SPDK_CONFIG_OCF 00:04:57.007 #define SPDK_CONFIG_OCF_PATH 00:04:57.007 #define SPDK_CONFIG_OPENSSL_PATH 00:04:57.007 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:57.007 #define SPDK_CONFIG_PGO_DIR 00:04:57.007 #undef SPDK_CONFIG_PGO_USE 00:04:57.007 #define SPDK_CONFIG_PREFIX /usr/local 00:04:57.007 #define SPDK_CONFIG_RAID5F 1 00:04:57.007 #undef 
SPDK_CONFIG_RBD 00:04:57.007 #define SPDK_CONFIG_RDMA 1 00:04:57.007 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:57.007 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:57.007 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:04:57.007 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:57.007 #undef SPDK_CONFIG_SHARED 00:04:57.007 #undef SPDK_CONFIG_SMA 00:04:57.007 #define SPDK_CONFIG_TESTS 1 00:04:57.007 #undef SPDK_CONFIG_TSAN 00:04:57.007 #undef SPDK_CONFIG_UBLK 00:04:57.007 #define SPDK_CONFIG_UBSAN 1 00:04:57.007 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:57.007 #undef SPDK_CONFIG_URING 00:04:57.007 #define SPDK_CONFIG_URING_PATH 00:04:57.007 #undef SPDK_CONFIG_URING_ZNS 00:04:57.007 #undef SPDK_CONFIG_USDT 00:04:57.007 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:57.007 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:57.007 #undef SPDK_CONFIG_VFIO_USER 00:04:57.007 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:57.007 #define SPDK_CONFIG_VHOST 1 00:04:57.007 #define SPDK_CONFIG_VIRTIO 1 00:04:57.007 #undef SPDK_CONFIG_VTUNE 00:04:57.007 #define SPDK_CONFIG_VTUNE_DIR 00:04:57.007 #define SPDK_CONFIG_WERROR 1 00:04:57.007 #define SPDK_CONFIG_WPDK_DIR 00:04:57.007 #undef SPDK_CONFIG_XNVME 00:04:57.007 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:57.007 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:57.007 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.007 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:57.007 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.007 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.007 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.007 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.008 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.008 ++++ export PATH 00:04:57.008 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.008 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:57.008 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:57.008 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:57.008 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:57.008 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:57.008 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:57.008 +++ TEST_TAG=N/A 00:04:57.008 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:57.008 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 
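Note: the heavily escaped glob at the end of the config dump is only a substring test against the generated header -- it asks whether the build defines SPDK_CONFIG_DEBUG, which it does here. A minimal equivalent:

# Does the generated header enable the debug build?
config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build"                                    # true for this run
fi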
00:04:57.008 ++++ uname -s 00:04:57.008 +++ PM_OS=Linux 00:04:57.008 +++ MONITOR_RESOURCES_SUDO=() 00:04:57.008 +++ declare -A MONITOR_RESOURCES_SUDO 00:04:57.008 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:04:57.008 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:04:57.008 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:04:57.008 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:04:57.008 +++ SUDO[0]= 00:04:57.008 +++ SUDO[1]='sudo -E' 00:04:57.008 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:04:57.008 +++ [[ Linux == FreeBSD ]] 00:04:57.008 +++ [[ Linux == Linux ]] 00:04:57.008 +++ [[ QEMU != QEMU ]] 00:04:57.008 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:04:57.008 ++ : 0 00:04:57.008 ++ export RUN_NIGHTLY 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_RUN_VALGRIND 00:04:57.008 ++ : 1 00:04:57.008 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:57.008 ++ : 1 00:04:57.008 ++ export SPDK_TEST_UNITTEST 00:04:57.008 ++ : 00:04:57.008 ++ export SPDK_TEST_AUTOBUILD 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_RELEASE_BUILD 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_ISAL 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_ISCSI 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:57.008 ++ : 1 00:04:57.008 ++ export SPDK_TEST_NVME 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_NVME_PMR 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_NVME_BP 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_NVME_CLI 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_NVME_CUSE 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_NVME_FDP 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_NVMF 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_VFIOUSER 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_FUZZER 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_FUZZER_SHORT 00:04:57.008 ++ : rdma 00:04:57.008 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_RBD 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_VHOST 00:04:57.008 ++ : 1 00:04:57.008 ++ export SPDK_TEST_BLOCKDEV 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_IOAT 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_BLOBFS 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_VHOST_INIT 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_LVOL 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:57.008 ++ : 1 00:04:57.008 ++ export SPDK_RUN_ASAN 00:04:57.008 ++ : 1 00:04:57.008 ++ export SPDK_RUN_UBSAN 00:04:57.008 ++ : 00:04:57.008 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_RUN_NON_ROOT 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_CRYPTO 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_FTL 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_OCF 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_VMD 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_OPAL 00:04:57.008 ++ : 00:04:57.008 ++ export SPDK_TEST_NATIVE_DPDK 00:04:57.008 ++ : true 00:04:57.008 ++ export SPDK_AUTOTEST_X 00:04:57.008 ++ : 1 00:04:57.008 ++ export SPDK_TEST_RAID5 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_URING 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_USDT 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_USE_IGB_UIO 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_SCHEDULER 00:04:57.008 ++ : 0 
00:04:57.008 ++ export SPDK_TEST_SCANBUILD 00:04:57.008 ++ : 00:04:57.008 ++ export SPDK_TEST_NVMF_NICS 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_SMA 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_DAOS 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_XNVME 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_ACCEL_DSA 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_ACCEL_IAA 00:04:57.008 ++ : 00:04:57.008 ++ export SPDK_TEST_FUZZER_TARGET 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_TEST_NVMF_MDNS 00:04:57.008 ++ : 0 00:04:57.008 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:57.008 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:57.008 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:57.008 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:57.008 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:57.008 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.008 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.008 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.008 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.008 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:57.008 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:57.008 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:57.008 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:57.008 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:57.008 ++ PYTHONDONTWRITEBYTECODE=1 00:04:57.008 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:57.008 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:57.008 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:57.008 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:57.008 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:57.008 ++ rm -rf /var/tmp/asan_suppression_file 00:04:57.008 ++ cat 00:04:57.008 ++ echo leak:libfuse3.so 00:04:57.008 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:57.008 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:57.008 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:57.008 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:57.008 ++ '[' -z /var/spdk/dependencies ']' 00:04:57.008 ++ export DEPENDENCY_DIR 00:04:57.008 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.008 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.008 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.008 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.008 ++ export QEMU_BIN= 
00:04:57.008 ++ QEMU_BIN= 00:04:57.008 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:57.008 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:57.008 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:57.008 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:57.008 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:57.008 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:57.008 ++ '[' 0 -eq 0 ']' 00:04:57.008 ++ export valgrind= 00:04:57.008 ++ valgrind= 00:04:57.008 +++ uname -s 00:04:57.008 ++ '[' Linux = Linux ']' 00:04:57.008 ++ HUGEMEM=4096 00:04:57.008 ++ export CLEAR_HUGE=yes 00:04:57.008 ++ CLEAR_HUGE=yes 00:04:57.008 ++ [[ 0 -eq 1 ]] 00:04:57.008 ++ [[ 0 -eq 1 ]] 00:04:57.008 ++ MAKE=make 00:04:57.008 +++ nproc 00:04:57.008 ++ MAKEFLAGS=-j10 00:04:57.008 ++ export HUGEMEM=4096 00:04:57.008 ++ HUGEMEM=4096 00:04:57.008 ++ NO_HUGE=() 00:04:57.008 ++ TEST_MODE= 00:04:57.008 ++ [[ -z '' ]] 00:04:57.008 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:57.008 ++ exec 00:04:57.008 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:57.008 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:57.008 ++ set_test_storage 2147483648 00:04:57.008 ++ [[ -v testdir ]] 00:04:57.009 ++ local requested_size=2147483648 00:04:57.009 ++ local mount target_dir 00:04:57.009 ++ local -A mounts fss sizes avails uses 00:04:57.009 ++ local source fs size avail mount use 00:04:57.009 ++ local storage_fallback storage_candidates 00:04:57.009 +++ mktemp -udt spdk.XXXXXX 00:04:57.009 ++ storage_fallback=/tmp/spdk.EJncWR 00:04:57.009 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:57.009 ++ [[ -n '' ]] 00:04:57.009 ++ [[ -n '' ]] 00:04:57.009 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.EJncWR/tests/unit /tmp/spdk.EJncWR 00:04:57.267 ++ requested_size=2214592512 00:04:57.267 ++ read -r source fs size use avail _ mount 00:04:57.267 +++ df -T 00:04:57.267 +++ grep -v Filesystem 00:04:57.267 ++ mounts["$mount"]=tmpfs 00:04:57.267 ++ fss["$mount"]=tmpfs 00:04:57.267 ++ avails["$mount"]=1252601856 00:04:57.267 ++ sizes["$mount"]=1253683200 00:04:57.267 ++ uses["$mount"]=1081344 00:04:57.267 ++ read -r source fs size use avail _ mount 00:04:57.267 ++ mounts["$mount"]=/dev/vda1 00:04:57.267 ++ fss["$mount"]=ext4 00:04:57.267 ++ avails["$mount"]=10212327424 00:04:57.267 ++ sizes["$mount"]=20616794112 00:04:57.267 ++ uses["$mount"]=10387689472 00:04:57.267 ++ read -r source fs size use avail _ mount 00:04:57.267 ++ mounts["$mount"]=tmpfs 00:04:57.267 ++ fss["$mount"]=tmpfs 00:04:57.267 ++ avails["$mount"]=6268399616 00:04:57.267 ++ sizes["$mount"]=6268399616 00:04:57.267 ++ uses["$mount"]=0 00:04:57.267 ++ read -r source fs size use avail _ mount 00:04:57.267 ++ mounts["$mount"]=tmpfs 00:04:57.267 ++ fss["$mount"]=tmpfs 00:04:57.267 ++ avails["$mount"]=5242880 00:04:57.267 ++ sizes["$mount"]=5242880 00:04:57.267 ++ uses["$mount"]=0 00:04:57.267 ++ read -r source fs size use avail _ mount 00:04:57.267 ++ mounts["$mount"]=/dev/vda15 00:04:57.267 ++ fss["$mount"]=vfat 00:04:57.267 ++ avails["$mount"]=103061504 00:04:57.267 ++ sizes["$mount"]=109395968 00:04:57.267 ++ uses["$mount"]=6334464 00:04:57.267 ++ read -r source fs size use avail _ mount 00:04:57.267 ++ mounts["$mount"]=tmpfs 00:04:57.267 ++ 
fss["$mount"]=tmpfs 00:04:57.267 ++ avails["$mount"]=1253675008 00:04:57.267 ++ sizes["$mount"]=1253679104 00:04:57.267 ++ uses["$mount"]=4096 00:04:57.267 ++ read -r source fs size use avail _ mount 00:04:57.267 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:04:57.267 ++ fss["$mount"]=fuse.sshfs 00:04:57.267 ++ avails["$mount"]=90583638016 00:04:57.267 ++ sizes["$mount"]=105088212992 00:04:57.267 ++ uses["$mount"]=9119141888 00:04:57.267 ++ read -r source fs size use avail _ mount 00:04:57.267 ++ printf '* Looking for test storage...\n' 00:04:57.267 * Looking for test storage... 00:04:57.267 ++ local target_space new_size 00:04:57.267 ++ for target_dir in "${storage_candidates[@]}" 00:04:57.267 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.267 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:57.267 ++ mount=/ 00:04:57.267 ++ target_space=10212327424 00:04:57.267 ++ (( target_space == 0 || target_space < requested_size )) 00:04:57.267 ++ (( target_space >= requested_size )) 00:04:57.267 ++ [[ ext4 == tmpfs ]] 00:04:57.267 ++ [[ ext4 == ramfs ]] 00:04:57.267 ++ [[ / == / ]] 00:04:57.267 ++ new_size=12602281984 00:04:57.267 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:57.267 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.267 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.267 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.267 ++ return 0 00:04:57.267 ++ set -o errtrace 00:04:57.267 ++ shopt -s extdebug 00:04:57.267 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:57.267 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:57.267 11:28:29 unittest -- common/autotest_common.sh@1686 -- # true 00:04:57.267 11:28:29 unittest -- common/autotest_common.sh@1688 -- # xtrace_fd 00:04:57.267 11:28:29 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:57.267 11:28:29 unittest -- common/autotest_common.sh@29 -- # exec 00:04:57.267 11:28:29 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:57.267 11:28:29 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:04:57.267 11:28:29 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:57.267 11:28:29 unittest -- common/autotest_common.sh@18 -- # set -x 00:04:57.267 11:28:29 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:57.267 11:28:29 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:04:57.267 11:28:29 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:04:57.267 11:28:29 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:04:57.267 11:28:29 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@181 -- # hash lcov 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:04:57.268 --rc lcov_branch_coverage=1 00:04:57.268 --rc lcov_function_coverage=1 00:04:57.268 --rc genhtml_branch_coverage=1 00:04:57.268 --rc genhtml_function_coverage=1 00:04:57.268 --rc genhtml_legend=1 00:04:57.268 --rc geninfo_all_blocks=1 00:04:57.268 ' 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@201 -- # LCOV_OPTS=' 00:04:57.268 --rc lcov_branch_coverage=1 00:04:57.268 --rc lcov_function_coverage=1 00:04:57.268 --rc genhtml_branch_coverage=1 00:04:57.268 --rc genhtml_function_coverage=1 00:04:57.268 --rc genhtml_legend=1 00:04:57.268 --rc geninfo_all_blocks=1 00:04:57.268 ' 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:04:57.268 --rc lcov_branch_coverage=1 00:04:57.268 --rc lcov_function_coverage=1 00:04:57.268 --rc genhtml_branch_coverage=1 00:04:57.268 --rc genhtml_function_coverage=1 00:04:57.268 --rc genhtml_legend=1 00:04:57.268 --rc geninfo_all_blocks=1 00:04:57.268 --no-external' 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:04:57.268 --rc lcov_branch_coverage=1 00:04:57.268 --rc lcov_function_coverage=1 00:04:57.268 --rc genhtml_branch_coverage=1 00:04:57.268 --rc genhtml_function_coverage=1 00:04:57.268 --rc genhtml_legend=1 00:04:57.268 --rc geninfo_all_blocks=1 00:04:57.268 --no-external' 00:04:57.268 11:28:29 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:03.899 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:03.899 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:50.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:50.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:50.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:50.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:50.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:50.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:50.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:50.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:50.595 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:50.596 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:50.596 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:50.596 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:50.596 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:50.597 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:50.597 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:50.597 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:53.127 11:29:25 unittest -- unit/unittest.sh@208 -- # uname -m 00:05:53.127 11:29:25 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:05:53.127 11:29:25 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:53.127 11:29:25 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:53.127 11:29:25 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.127 11:29:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:53.127 ************************************ 00:05:53.127 START TEST unittest_pci_event 00:05:53.127 ************************************ 00:05:53.127 11:29:25 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:53.127 00:05:53.127 00:05:53.127 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.127 http://cunit.sourceforge.net/ 00:05:53.127 00:05:53.127 00:05:53.127 Suite: pci_event 00:05:53.128 Test: test_pci_parse_event ...[2024-06-10 11:29:25.083249] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:53.128 [2024-06-10 11:29:25.083921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:53.128 passed 00:05:53.128 00:05:53.128 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.128 suites 1 1 n/a 0 0 00:05:53.128 tests 1 1 1 0 0 00:05:53.128 asserts 15 15 15 0 n/a 00:05:53.128 00:05:53.128 Elapsed time = 0.001 seconds 00:05:53.128 00:05:53.128 real 0m0.041s 00:05:53.128 user 0m0.023s 00:05:53.128 sys 0m0.015s 00:05:53.128 11:29:25 unittest.unittest_pci_event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.128 ************************************ 00:05:53.128 END TEST unittest_pci_event 00:05:53.128 11:29:25 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 
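Note: each unittest_* block is a standalone CUnit binary; run_test appears to add little beyond the START/END banner and the real/user/sys timing, so a single suite can be re-run on its own. For example, the suite that just finished (path exactly as in the log):

# Invoke the pci_event suite directly
/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut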
00:05:53.128 ************************************ 00:05:53.128 11:29:25 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:53.128 11:29:25 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:53.128 11:29:25 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.128 11:29:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 ************************************ 00:05:53.128 START TEST unittest_include 00:05:53.128 ************************************ 00:05:53.128 11:29:25 unittest.unittest_include -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:53.128 00:05:53.128 00:05:53.128 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.128 http://cunit.sourceforge.net/ 00:05:53.128 00:05:53.128 00:05:53.128 Suite: histogram 00:05:53.128 Test: histogram_test ...passed 00:05:53.128 Test: histogram_merge ...passed 00:05:53.128 00:05:53.128 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.128 suites 1 1 n/a 0 0 00:05:53.128 tests 2 2 2 0 0 00:05:53.128 asserts 50 50 50 0 n/a 00:05:53.128 00:05:53.128 Elapsed time = 0.005 seconds 00:05:53.386 00:05:53.386 real 0m0.042s 00:05:53.386 user 0m0.015s 00:05:53.386 sys 0m0.027s 00:05:53.386 11:29:25 unittest.unittest_include -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.386 ************************************ 00:05:53.386 11:29:25 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:05:53.386 END TEST unittest_include 00:05:53.386 ************************************ 00:05:53.387 11:29:25 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:05:53.387 11:29:25 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:53.387 11:29:25 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.387 11:29:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:53.387 ************************************ 00:05:53.387 START TEST unittest_bdev 00:05:53.387 ************************************ 00:05:53.387 11:29:25 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # unittest_bdev 00:05:53.387 11:29:25 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:53.387 00:05:53.387 00:05:53.387 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.387 http://cunit.sourceforge.net/ 00:05:53.387 00:05:53.387 00:05:53.387 Suite: bdev 00:05:53.387 Test: bytes_to_blocks_test ...passed 00:05:53.387 Test: num_blocks_test ...passed 00:05:53.387 Test: io_valid_test ...passed 00:05:53.387 Test: open_write_test ...[2024-06-10 11:29:25.366429] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:53.387 [2024-06-10 11:29:25.366799] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:53.387 [2024-06-10 11:29:25.367007] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:53.387 passed 00:05:53.645 Test: claim_test ...passed 00:05:53.645 Test: alias_add_del_test ...[2024-06-10 11:29:25.489967] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 
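Note: the pci_event, include and bdev suites all come from the same driver script; outside CI the whole unit stage can be repeated by invoking it directly, which is what run_test unittest did at the top of this block (a sketch, paths as in this run):

# Re-run the complete unit test stage by hand
cd /home/vagrant/spdk_repo/spdk
./test/unit/unittest.sh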
00:05:53.645 [2024-06-10 11:29:25.490111] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4610:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:53.645 [2024-06-10 11:29:25.490158] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:53.645 passed 00:05:53.645 Test: get_device_stat_test ...passed 00:05:53.645 Test: bdev_io_types_test ...passed 00:05:53.645 Test: bdev_io_wait_test ...passed 00:05:53.645 Test: bdev_io_spans_split_test ...passed 00:05:53.903 Test: bdev_io_boundary_split_test ...passed 00:05:53.903 Test: bdev_io_max_size_and_segment_split_test ...[2024-06-10 11:29:25.728725] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:53.903 passed 00:05:53.903 Test: bdev_io_mix_split_test ...passed 00:05:53.903 Test: bdev_io_split_with_io_wait ...passed 00:05:53.903 Test: bdev_io_write_unit_split_test ...[2024-06-10 11:29:25.891197] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:53.903 [2024-06-10 11:29:25.891299] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:53.903 [2024-06-10 11:29:25.891336] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:53.903 [2024-06-10 11:29:25.891386] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:53.903 passed 00:05:54.162 Test: bdev_io_alignment_with_boundary ...passed 00:05:54.162 Test: bdev_io_alignment ...passed 00:05:54.162 Test: bdev_histograms ...passed 00:05:54.162 Test: bdev_write_zeroes ...passed 00:05:54.162 Test: bdev_compare_and_write ...passed 00:05:54.421 Test: bdev_compare ...passed 00:05:54.421 Test: bdev_compare_emulated ...passed 00:05:54.421 Test: bdev_zcopy_write ...passed 00:05:54.677 Test: bdev_zcopy_read ...passed 00:05:54.677 Test: bdev_open_while_hotremove ...passed 00:05:54.677 Test: bdev_close_while_hotremove ...passed 00:05:54.677 Test: bdev_open_ext_test ...[2024-06-10 11:29:26.519522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8141:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:54.677 passed 00:05:54.677 Test: bdev_open_ext_unregister ...[2024-06-10 11:29:26.519734] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8141:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:54.677 passed 00:05:54.677 Test: bdev_set_io_timeout ...passed 00:05:54.677 Test: bdev_set_qd_sampling ...passed 00:05:54.677 Test: lba_range_overlap ...passed 00:05:54.677 Test: lock_lba_range_check_ranges ...passed 00:05:54.677 Test: lock_lba_range_with_io_outstanding ...passed 00:05:54.935 Test: lock_lba_range_overlapped ...passed 00:05:54.935 Test: bdev_quiesce ...[2024-06-10 11:29:26.816440] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10064:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
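Note on the bdev_open_ext_test / bdev_open_ext_unregister failures above: the "Missing event callback function" errors are the negative path for spdk_bdev_open_ext(), which refuses to open a bdev without an event callback. A minimal sketch of the expected open/close pattern follows; it is illustrative only (not taken from this run), assumes it executes on an SPDK app thread, and the names bdev_event_cb and open_and_close are made up for the example.

    #include "spdk/bdev.h"

    /* The event callback is mandatory for spdk_bdev_open_ext(); passing NULL is
     * exactly the failure exercised by bdev_open_ext_test above. */
    static void
    bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *event_ctx)
    {
        if (type == SPDK_BDEV_EVENT_REMOVE) {
            /* hot-remove handling would go here */
        }
    }

    static int
    open_and_close(const char *name)
    {
        struct spdk_bdev_desc *desc = NULL;
        int rc;

        rc = spdk_bdev_open_ext(name, true /* write */, bdev_event_cb, NULL, &desc);
        if (rc != 0) {
            return rc; /* e.g. rejected when the callback is NULL */
        }
        spdk_bdev_close(desc);
        return 0;
    }
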
00:05:54.935 passed 00:05:54.935 Test: bdev_io_abort ...passed 00:05:54.935 Test: bdev_unmap ...passed 00:05:55.192 Test: bdev_write_zeroes_split_test ...passed 00:05:55.192 Test: bdev_set_options_test ...[2024-06-10 11:29:27.010876] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:55.192 passed 00:05:55.192 Test: bdev_get_memory_domains ...passed 00:05:55.192 Test: bdev_io_ext ...passed 00:05:55.192 Test: bdev_io_ext_no_opts ...passed 00:05:55.192 Test: bdev_io_ext_invalid_opts ...passed 00:05:55.192 Test: bdev_io_ext_split ...passed 00:05:55.450 Test: bdev_io_ext_bounce_buffer ...passed 00:05:55.450 Test: bdev_register_uuid_alias ...[2024-06-10 11:29:27.300682] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name d864abcf-5b95-4988-b19c-8c717f31e0c9 already exists 00:05:55.450 [2024-06-10 11:29:27.300748] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:d864abcf-5b95-4988-b19c-8c717f31e0c9 alias for bdev bdev0 00:05:55.450 passed 00:05:55.450 Test: bdev_unregister_by_name ...passed 00:05:55.450 Test: for_each_bdev_test ...[2024-06-10 11:29:27.327964] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7931:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:55.450 [2024-06-10 11:29:27.328043] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7939:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:55.450 passed 00:05:55.450 Test: bdev_seek_test ...passed 00:05:55.450 Test: bdev_copy ...passed 00:05:55.450 Test: bdev_copy_split_test ...passed 00:05:55.450 Test: examine_locks ...passed 00:05:55.450 Test: claim_v2_rwo ...[2024-06-10 11:29:27.483112] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.450 [2024-06-10 11:29:27.483188] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8665:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.450 [2024-06-10 11:29:27.483206] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.450 [2024-06-10 11:29:27.483265] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483281] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.451 passed 00:05:55.451 Test: claim_v2_rom ...[2024-06-10 11:29:27.483334] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8660:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:55.451 [2024-06-10 11:29:27.483468] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483519] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483540] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483568] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:55.451 passed 00:05:55.451 Test: claim_v2_rwm ...[2024-06-10 11:29:27.483612] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:55.451 [2024-06-10 11:29:27.483653] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:55.451 [2024-06-10 11:29:27.483747] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8733:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:55.451 [2024-06-10 11:29:27.483803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483827] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483852] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483871] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483898] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8753:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.483936] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8733:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:55.451 passed 00:05:55.451 Test: claim_v2_existing_writer ...passed 00:05:55.451 Test: claim_v2_existing_v1 ...[2024-06-10 11:29:27.484092] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:55.451 [2024-06-10 11:29:27.484125] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:55.451 [2024-06-10 11:29:27.484226] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.484254] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.484273] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:55.451 passed 00:05:55.451 Test: claim_v1_existing_v2 ...[2024-06-10 11:29:27.484373] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.451 [2024-06-10 11:29:27.484420] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.451 [2024-06-10 
11:29:27.484453] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:55.451 passed 00:05:55.451 Test: examine_claimed ...[2024-06-10 11:29:27.484723] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:55.451 passed 00:05:55.451 00:05:55.451 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.451 suites 1 1 n/a 0 0 00:05:55.451 tests 59 59 59 0 0 00:05:55.451 asserts 4599 4599 4599 0 n/a 00:05:55.451 00:05:55.451 Elapsed time = 2.218 seconds 00:05:55.709 11:29:27 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:55.709 00:05:55.709 00:05:55.709 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.709 http://cunit.sourceforge.net/ 00:05:55.709 00:05:55.709 00:05:55.709 Suite: nvme 00:05:55.709 Test: test_create_ctrlr ...passed 00:05:55.709 Test: test_reset_ctrlr ...[2024-06-10 11:29:27.548945] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.709 passed 00:05:55.709 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:55.709 Test: test_failover_ctrlr ...passed 00:05:55.709 Test: test_race_between_failover_and_add_secondary_trid ...[2024-06-10 11:29:27.552578] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.709 [2024-06-10 11:29:27.552868] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.709 [2024-06-10 11:29:27.553162] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.709 passed 00:05:55.709 Test: test_pending_reset ...[2024-06-10 11:29:27.555609] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.709 [2024-06-10 11:29:27.555968] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.709 passed 00:05:55.709 Test: test_attach_ctrlr ...[2024-06-10 11:29:27.557323] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:55.710 passed 00:05:55.710 Test: test_aer_cb ...passed 00:05:55.710 Test: test_submit_nvme_cmd ...passed 00:05:55.710 Test: test_add_remove_trid ...passed 00:05:55.710 Test: test_abort ...[2024-06-10 11:29:27.561327] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7447:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:55.710 passed 00:05:55.710 Test: test_get_io_qpair ...passed 00:05:55.710 Test: test_bdev_unregister ...passed 00:05:55.710 Test: test_compare_ns ...passed 00:05:55.710 Test: test_init_ana_log_page ...passed 00:05:55.710 Test: test_get_memory_domains ...passed 00:05:55.710 Test: test_reconnect_qpair ...[2024-06-10 11:29:27.564414] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:55.710 passed 00:05:55.710 Test: test_create_bdev_ctrlr ...[2024-06-10 11:29:27.564984] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5373:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:55.710 passed 00:05:55.710 Test: test_add_multi_ns_to_bdev ...[2024-06-10 11:29:27.566380] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4564:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:55.710 passed 00:05:55.710 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:55.710 Test: test_admin_path ...passed 00:05:55.710 Test: test_reset_bdev_ctrlr ...passed 00:05:55.710 Test: test_find_io_path ...passed 00:05:55.710 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:55.710 Test: test_retry_io_for_io_path_error ...passed 00:05:55.710 Test: test_retry_io_count ...passed 00:05:55.710 Test: test_concurrent_read_ana_log_page ...passed 00:05:55.710 Test: test_retry_io_for_ana_error ...passed 00:05:55.710 Test: test_check_io_error_resiliency_params ...[2024-06-10 11:29:27.574187] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6067:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:55.710 [2024-06-10 11:29:27.574265] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6071:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:55.710 [2024-06-10 11:29:27.574319] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:55.710 [2024-06-10 11:29:27.574352] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6083:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:55.710 [2024-06-10 11:29:27.574376] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6095:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:55.710 [2024-06-10 11:29:27.574440] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6095:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:55.710 passed 00:05:55.710 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-06-10 11:29:27.574464] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6075:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:55.710 [2024-06-10 11:29:27.574513] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6090:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:55.710 [2024-06-10 11:29:27.574552] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6087:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:55.710 passed 00:05:55.710 Test: test_reconnect_ctrlr ...[2024-06-10 11:29:27.575383] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 [2024-06-10 11:29:27.575545] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:55.710 [2024-06-10 11:29:27.575833] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 [2024-06-10 11:29:27.575987] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 [2024-06-10 11:29:27.576151] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 passed 00:05:55.710 Test: test_retry_failover_ctrlr ...[2024-06-10 11:29:27.576529] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 passed 00:05:55.710 Test: test_fail_path ...[2024-06-10 11:29:27.577163] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 [2024-06-10 11:29:27.577378] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 [2024-06-10 11:29:27.577572] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 [2024-06-10 11:29:27.577760] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 [2024-06-10 11:29:27.577963] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 passed 00:05:55.710 Test: test_nvme_ns_cmp ...passed 00:05:55.710 Test: test_ana_transition ...passed 00:05:55.710 Test: test_set_preferred_path ...passed 00:05:55.710 Test: test_find_next_io_path ...passed 00:05:55.710 Test: test_find_io_path_min_qd ...passed 00:05:55.710 Test: test_disable_auto_failback ...[2024-06-10 11:29:27.580526] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 passed 00:05:55.710 Test: test_set_multipath_policy ...passed 00:05:55.710 Test: test_uuid_generation ...passed 00:05:55.710 Test: test_retry_io_to_same_path ...passed 00:05:55.710 Test: test_race_between_reset_and_disconnected ...passed 00:05:55.710 Test: test_ctrlr_op_rpc ...passed 00:05:55.710 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:55.710 Test: test_disable_enable_ctrlr ...[2024-06-10 11:29:27.585296] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.710 [2024-06-10 11:29:27.585503] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
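Note on test_check_io_error_resiliency_params above: the *ERROR* lines enumerate the relationships that must hold between ctrlr_loss_timeout_sec, reconnect_delay_sec and fast_io_fail_timeout_sec. The sketch below simply restates those logged rules as a standalone checker; it is not SPDK's bdev_nvme implementation, and the function name resiliency_params_valid is invented for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hedged restatement of the parameter rules printed by the test above. */
    static bool
    resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                            uint32_t reconnect_delay_sec,
                            uint32_t fast_io_fail_timeout_sec)
    {
        if (ctrlr_loss_timeout_sec < -1) {
            return false;                 /* "can't be less than -1" */
        }
        if (ctrlr_loss_timeout_sec == 0) {
            /* both other timeouts must also be disabled */
            return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
        }
        if (reconnect_delay_sec == 0) {
            return false;                 /* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
        }
        if (ctrlr_loss_timeout_sec > 0 &&
            reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
            return false;                 /* "can't be more than ctrlr_loss_timeout_sec" */
        }
        if (fast_io_fail_timeout_sec != 0) {
            if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
                return false;             /* "can't be more than fast_io_fail_timeout_sec" */
            }
            if (ctrlr_loss_timeout_sec > 0 &&
                fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                return false;             /* "can't be more than ctrlr_loss_timeout_sec" */
            }
        }
        return true;
    }
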
00:05:55.710 passed 00:05:55.710 Test: test_delete_ctrlr_done ...passed 00:05:55.710 Test: test_ns_remove_during_reset ...passed 00:05:55.710 00:05:55.710 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.710 suites 1 1 n/a 0 0 00:05:55.710 tests 48 48 48 0 0 00:05:55.710 asserts 3565 3565 3565 0 n/a 00:05:55.710 00:05:55.710 Elapsed time = 0.040 seconds 00:05:55.710 11:29:27 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:55.710 00:05:55.710 00:05:55.710 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.710 http://cunit.sourceforge.net/ 00:05:55.710 00:05:55.710 Test Options 00:05:55.710 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:55.710 00:05:55.710 Suite: raid 00:05:55.710 Test: test_create_raid ...passed 00:05:55.710 Test: test_create_raid_superblock ...passed 00:05:55.710 Test: test_delete_raid ...passed 00:05:55.710 Test: test_create_raid_invalid_args ...[2024-06-10 11:29:27.643378] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:55.710 [2024-06-10 11:29:27.644161] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:55.710 [2024-06-10 11:29:27.645670] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:55.710 [2024-06-10 11:29:27.646196] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:55.710 [2024-06-10 11:29:27.646414] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:55.710 [2024-06-10 11:29:27.648679] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:55.710 [2024-06-10 11:29:27.648744] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:55.710 passed 00:05:55.710 Test: test_delete_raid_invalid_args ...passed 00:05:55.710 Test: test_io_channel ...passed 00:05:55.710 Test: test_reset_io ...passed 00:05:55.710 Test: test_multi_raid ...passed 00:05:55.710 Test: test_io_type_supported ...passed 00:05:55.710 Test: test_raid_json_dump_info ...passed 00:05:55.710 Test: test_context_size ...passed 00:05:55.710 Test: test_raid_level_conversions ...passed 00:05:55.710 Test: test_raid_io_split ...passed 00:05:55.710 Test: test_raid_process ...passed 00:05:55.710 00:05:55.710 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.710 suites 1 1 n/a 0 0 00:05:55.710 tests 14 14 14 0 0 00:05:55.710 asserts 6183 6183 6183 0 n/a 00:05:55.710 00:05:55.710 Elapsed time = 0.040 seconds 00:05:55.710 11:29:27 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:55.710 00:05:55.710 00:05:55.710 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.710 http://cunit.sourceforge.net/ 00:05:55.710 00:05:55.710 00:05:55.710 Suite: raid_sb 00:05:55.710 Test: test_raid_bdev_write_superblock ...passed 00:05:55.710 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:55.710 Test: 
test_raid_bdev_parse_superblock ...[2024-06-10 11:29:27.716304] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:55.710 passed 00:05:55.710 Suite: raid_sb_md 00:05:55.710 Test: test_raid_bdev_write_superblock ...passed 00:05:55.710 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:55.710 Test: test_raid_bdev_parse_superblock ...[2024-06-10 11:29:27.716926] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:55.710 passed 00:05:55.710 Suite: raid_sb_md_interleaved 00:05:55.710 Test: test_raid_bdev_write_superblock ...passed 00:05:55.710 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:55.710 Test: test_raid_bdev_parse_superblock ...[2024-06-10 11:29:27.717315] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:55.710 passed 00:05:55.710 00:05:55.710 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.711 suites 3 3 n/a 0 0 00:05:55.711 tests 9 9 9 0 0 00:05:55.711 asserts 139 139 139 0 n/a 00:05:55.711 00:05:55.711 Elapsed time = 0.002 seconds 00:05:55.711 11:29:27 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:55.711 00:05:55.711 00:05:55.711 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.711 http://cunit.sourceforge.net/ 00:05:55.711 00:05:55.711 00:05:55.711 Suite: concat 00:05:55.711 Test: test_concat_start ...passed 00:05:55.711 Test: test_concat_rw ...passed 00:05:55.711 Test: test_concat_null_payload ...passed 00:05:55.711 00:05:55.711 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.711 suites 1 1 n/a 0 0 00:05:55.711 tests 3 3 3 0 0 00:05:55.711 asserts 8460 8460 8460 0 n/a 00:05:55.711 00:05:55.711 Elapsed time = 0.006 seconds 00:05:55.969 11:29:27 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:05:55.969 00:05:55.969 00:05:55.969 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.969 http://cunit.sourceforge.net/ 00:05:55.969 00:05:55.969 00:05:55.969 Suite: raid0 00:05:55.969 Test: test_write_io ...passed 00:05:55.969 Test: test_read_io ...passed 00:05:55.969 Test: test_unmap_io ...passed 00:05:55.969 Test: test_io_failure ...passed 00:05:55.969 Suite: raid0_dif 00:05:55.969 Test: test_write_io ...passed 00:05:55.969 Test: test_read_io ...passed 00:05:55.969 Test: test_unmap_io ...passed 00:05:55.969 Test: test_io_failure ...passed 00:05:55.969 00:05:55.969 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.969 suites 2 2 n/a 0 0 00:05:55.969 tests 8 8 8 0 0 00:05:55.969 asserts 368291 368291 368291 0 n/a 00:05:55.969 00:05:55.969 Elapsed time = 0.121 seconds 00:05:55.969 11:29:27 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:55.969 00:05:55.969 00:05:55.969 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.969 http://cunit.sourceforge.net/ 00:05:55.969 00:05:55.969 00:05:55.969 Suite: raid1 00:05:55.969 Test: test_raid1_start ...passed 00:05:55.969 Test: test_raid1_read_balancing ...passed 00:05:55.969 Test: test_raid1_write_error ...passed 00:05:55.969 Test: test_raid1_read_error ...passed 
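Note on the per-component binaries above (bdev_raid_sb_ut, concat_ut, raid0_ut, raid1_ut): each is a small standalone CUnit 2.1-3 runner, which is why every block of output repeats the CUnit banner and a Run Summary. The skeleton below shows the general shape of such a runner; the suite name "example" and test_example body are placeholders, not SPDK code.

    #include <CUnit/Basic.h>

    /* Placeholder test body; real *_ut binaries register the tests listed above. */
    static void
    test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_ADD_TEST(suite, test_example);

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();                       /* prints the Run Summary seen in the log */
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return num_failures;
    }

The exit code (number of failures) is what lets the surrounding run_test wrapper mark each suite as passed or failed.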
00:05:55.969 00:05:55.969 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.969 suites 1 1 n/a 0 0 00:05:55.969 tests 4 4 4 0 0 00:05:55.969 asserts 4374 4374 4374 0 n/a 00:05:55.969 00:05:55.969 Elapsed time = 0.005 seconds 00:05:55.969 11:29:28 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:55.969 00:05:55.969 00:05:55.969 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.969 http://cunit.sourceforge.net/ 00:05:55.969 00:05:55.969 00:05:55.969 Suite: zone 00:05:55.969 Test: test_zone_get_operation ...passed 00:05:55.969 Test: test_bdev_zone_get_info ...passed 00:05:55.969 Test: test_bdev_zone_management ...passed 00:05:55.969 Test: test_bdev_zone_append ...passed 00:05:55.969 Test: test_bdev_zone_append_with_md ...passed 00:05:55.969 Test: test_bdev_zone_appendv ...passed 00:05:55.969 Test: test_bdev_zone_appendv_with_md ...passed 00:05:55.969 Test: test_bdev_io_get_append_location ...passed 00:05:55.969 00:05:55.969 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.969 suites 1 1 n/a 0 0 00:05:55.969 tests 8 8 8 0 0 00:05:55.969 asserts 94 94 94 0 n/a 00:05:55.969 00:05:55.969 Elapsed time = 0.000 seconds 00:05:56.227 11:29:28 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:56.227 00:05:56.227 00:05:56.227 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.227 http://cunit.sourceforge.net/ 00:05:56.227 00:05:56.227 00:05:56.227 Suite: gpt_parse 00:05:56.227 Test: test_parse_mbr_and_primary ...[2024-06-10 11:29:28.059556] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:56.227 [2024-06-10 11:29:28.059928] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:56.227 [2024-06-10 11:29:28.060029] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:56.227 [2024-06-10 11:29:28.060136] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:56.227 [2024-06-10 11:29:28.060199] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:56.227 [2024-06-10 11:29:28.060309] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:56.227 passed 00:05:56.227 Test: test_parse_secondary ...[2024-06-10 11:29:28.060887] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:56.227 [2024-06-10 11:29:28.060972] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:56.227 [2024-06-10 11:29:28.061027] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:56.227 [2024-06-10 11:29:28.061097] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:56.227 passed 00:05:56.227 Test: test_check_mbr ...[2024-06-10 11:29:28.061654] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:56.227 passed 00:05:56.227 Test: 
test_read_header ...[2024-06-10 11:29:28.061729] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:56.227 [2024-06-10 11:29:28.061810] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:56.228 [2024-06-10 11:29:28.061929] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:56.228 [2024-06-10 11:29:28.062029] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:56.228 [2024-06-10 11:29:28.062080] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:56.228 [2024-06-10 11:29:28.062145] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:56.228 [2024-06-10 11:29:28.062205] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:56.228 passed 00:05:56.228 Test: test_read_partitions ...[2024-06-10 11:29:28.062284] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:56.228 [2024-06-10 11:29:28.062342] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:56.228 [2024-06-10 11:29:28.062387] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:56.228 [2024-06-10 11:29:28.062425] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:56.228 [2024-06-10 11:29:28.062754] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:56.228 passed 00:05:56.228 00:05:56.228 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.228 suites 1 1 n/a 0 0 00:05:56.228 tests 5 5 5 0 0 00:05:56.228 asserts 33 33 33 0 n/a 00:05:56.228 00:05:56.228 Elapsed time = 0.004 seconds 00:05:56.228 11:29:28 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:56.228 00:05:56.228 00:05:56.228 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.228 http://cunit.sourceforge.net/ 00:05:56.228 00:05:56.228 00:05:56.228 Suite: bdev_part 00:05:56.228 Test: part_test ...[2024-06-10 11:29:28.100630] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:56.228 passed 00:05:56.228 Test: part_free_test ...passed 00:05:56.228 Test: part_get_io_channel_test ...passed 00:05:56.228 Test: part_construct_ext ...passed 00:05:56.228 00:05:56.228 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.228 suites 1 1 n/a 0 0 00:05:56.228 tests 4 4 4 0 0 00:05:56.228 asserts 48 48 48 0 n/a 00:05:56.228 00:05:56.228 Elapsed time = 0.058 seconds 00:05:56.228 11:29:28 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:56.228 00:05:56.228 00:05:56.228 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.228 http://cunit.sourceforge.net/ 00:05:56.228 00:05:56.228 00:05:56.228 Suite: scsi_nvme_suite 00:05:56.228 Test: 
scsi_nvme_translate_test ...passed 00:05:56.228 00:05:56.228 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.228 suites 1 1 n/a 0 0 00:05:56.228 tests 1 1 1 0 0 00:05:56.228 asserts 104 104 104 0 n/a 00:05:56.228 00:05:56.228 Elapsed time = 0.000 seconds 00:05:56.228 11:29:28 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:56.228 00:05:56.228 00:05:56.228 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.228 http://cunit.sourceforge.net/ 00:05:56.228 00:05:56.228 00:05:56.228 Suite: lvol 00:05:56.228 Test: ut_lvs_init ...[2024-06-10 11:29:28.240889] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:56.228 [2024-06-10 11:29:28.241282] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:56.228 passed 00:05:56.228 Test: ut_lvol_init ...passed 00:05:56.228 Test: ut_lvol_snapshot ...passed 00:05:56.228 Test: ut_lvol_clone ...passed 00:05:56.228 Test: ut_lvs_destroy ...passed 00:05:56.228 Test: ut_lvs_unload ...passed 00:05:56.228 Test: ut_lvol_resize ...[2024-06-10 11:29:28.242758] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:56.228 passed 00:05:56.228 Test: ut_lvol_set_read_only ...passed 00:05:56.228 Test: ut_lvol_hotremove ...passed 00:05:56.228 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:56.228 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:56.228 Test: ut_lvol_read_write ...passed 00:05:56.228 Test: ut_vbdev_lvol_submit_request ...passed 00:05:56.228 Test: ut_lvol_examine_config ...passed 00:05:56.228 Test: ut_lvol_examine_disk ...[2024-06-10 11:29:28.243434] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:56.228 passed 00:05:56.228 Test: ut_lvol_rename ...[2024-06-10 11:29:28.244417] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:56.228 [2024-06-10 11:29:28.244523] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:56.228 passed 00:05:56.228 Test: ut_bdev_finish ...passed 00:05:56.228 Test: ut_lvs_rename ...passed 00:05:56.228 Test: ut_lvol_seek ...passed 00:05:56.228 Test: ut_esnap_dev_create ...[2024-06-10 11:29:28.245237] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:56.228 [2024-06-10 11:29:28.245309] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:56.228 [2024-06-10 11:29:28.245341] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:56.228 [2024-06-10 11:29:28.245390] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1911:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:56.228 passed 00:05:56.228 Test: ut_lvol_esnap_clone_bad_args ...[2024-06-10 11:29:28.245550] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:56.228 
[2024-06-10 11:29:28.245595] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:56.228 passed 00:05:56.228 Test: ut_lvol_shallow_copy ...[2024-06-10 11:29:28.245863] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:05:56.228 passed 00:05:56.228 Test: ut_lvol_set_external_parent ...[2024-06-10 11:29:28.245918] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:05:56.228 [2024-06-10 11:29:28.246026] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:56.228 passed 00:05:56.228 00:05:56.228 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.228 suites 1 1 n/a 0 0 00:05:56.228 tests 23 23 23 0 0 00:05:56.228 asserts 798 798 798 0 n/a 00:05:56.228 00:05:56.228 Elapsed time = 0.005 seconds 00:05:56.228 11:29:28 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:56.487 00:05:56.487 00:05:56.487 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.487 http://cunit.sourceforge.net/ 00:05:56.487 00:05:56.487 00:05:56.487 Suite: zone_block 00:05:56.487 Test: test_zone_block_create ...passed 00:05:56.487 Test: test_zone_block_create_invalid ...[2024-06-10 11:29:28.308954] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:56.487 [2024-06-10 11:29:28.309233] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-10 11:29:28.309404] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:56.487 [2024-06-10 11:29:28.309457] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-10 11:29:28.309614] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:56.487 [2024-06-10 11:29:28.309653] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:56.487 Test: test_get_zone_info ...[2024-06-10 11:29:28.309735] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:56.487 [2024-06-10 11:29:28.309788] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-06-10 11:29:28.310257] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.310334] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:56.487 [2024-06-10 11:29:28.310394] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 passed 00:05:56.487 Test: test_supported_io_types ...passed 00:05:56.487 Test: test_reset_zone ...[2024-06-10 11:29:28.311286] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.311341] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 passed 00:05:56.487 Test: test_open_zone ...[2024-06-10 11:29:28.311746] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.312348] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.312422] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 passed 00:05:56.487 Test: test_zone_write ...[2024-06-10 11:29:28.312848] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:56.487 [2024-06-10 11:29:28.312898] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.312946] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:56.487 [2024-06-10 11:29:28.312998] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.318454] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:56.487 [2024-06-10 11:29:28.318519] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.318596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:56.487 [2024-06-10 11:29:28.318628] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.323731] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:56.487 [2024-06-10 11:29:28.323811] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:56.487 passed 00:05:56.487 Test: test_zone_read ...[2024-06-10 11:29:28.324259] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:56.487 [2024-06-10 11:29:28.324298] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.324365] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:56.487 [2024-06-10 11:29:28.324398] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.324814] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:56.487 passed 00:05:56.487 Test: test_close_zone ...[2024-06-10 11:29:28.324858] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.325169] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.325236] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.325433] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.325483] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 passed 00:05:56.487 Test: test_finish_zone ...[2024-06-10 11:29:28.326049] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 passed 00:05:56.487 Test: test_append_zone ...[2024-06-10 11:29:28.326110] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.326412] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:56.487 [2024-06-10 11:29:28.326459] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.326527] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:56.487 [2024-06-10 11:29:28.326566] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.487 [2024-06-10 11:29:28.338048] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:56.487 [2024-06-10 11:29:28.338142] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
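Note on the zone_block errors above ("invalid state", "invalid address (lba, wp)", "Write exceeds zone capacity"): they describe the three checks a zoned write must pass. The sketch below models those checks only; struct zone_state and its fields are illustrative and do not match SPDK's internal structures.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative zone bookkeeping, not SPDK's vbdev_zone_block types. */
    struct zone_state {
        uint64_t start_lba;      /* first LBA of the zone */
        uint64_t capacity;       /* writable blocks in the zone */
        uint64_t write_pointer;  /* next LBA that may be written */
        bool     writable;       /* open/empty rather than full or read-only */
    };

    static bool
    zone_write_allowed(const struct zone_state *z, uint64_t lba, uint64_t num_blocks)
    {
        if (!z->writable) {
            return false;                                /* "zone in invalid state" */
        }
        if (lba != z->write_pointer) {
            return false;                                /* "invalid address (lba, wp)" */
        }
        if (lba + num_blocks > z->start_lba + z->capacity) {
            return false;                                /* "Write exceeds zone capacity" */
        }
        return true;
    }
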
00:05:56.487 passed 00:05:56.487 00:05:56.487 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.487 suites 1 1 n/a 0 0 00:05:56.487 tests 11 11 11 0 0 00:05:56.487 asserts 3437 3437 3437 0 n/a 00:05:56.487 00:05:56.487 Elapsed time = 0.031 seconds 00:05:56.487 11:29:28 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:56.487 00:05:56.487 00:05:56.487 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.487 http://cunit.sourceforge.net/ 00:05:56.487 00:05:56.487 00:05:56.487 Suite: bdev 00:05:56.487 Test: basic ...[2024-06-10 11:29:28.462162] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x562b1565b8a1): Operation not permitted (rc=-1) 00:05:56.487 [2024-06-10 11:29:28.462895] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x562b1565b860): Operation not permitted (rc=-1) 00:05:56.487 [2024-06-10 11:29:28.463053] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x562b1565b8a1): Operation not permitted (rc=-1) 00:05:56.487 passed 00:05:56.487 Test: unregister_and_close ...passed 00:05:56.746 Test: unregister_and_close_different_threads ...passed 00:05:56.746 Test: basic_qos ...passed 00:05:56.746 Test: put_channel_during_reset ...passed 00:05:56.746 Test: aborted_reset ...passed 00:05:57.004 Test: aborted_reset_no_outstanding_io ...passed 00:05:57.004 Test: io_during_reset ...passed 00:05:57.004 Test: reset_completions ...passed 00:05:57.004 Test: io_during_qos_queue ...passed 00:05:57.004 Test: io_during_qos_reset ...passed 00:05:57.262 Test: enomem ...passed 00:05:57.262 Test: enomem_multi_bdev ...passed 00:05:57.262 Test: enomem_multi_bdev_unregister ...passed 00:05:57.262 Test: enomem_multi_io_target ...passed 00:05:57.521 Test: qos_dynamic_enable ...passed 00:05:57.521 Test: bdev_histograms_mt ...passed 00:05:57.521 Test: bdev_set_io_timeout_mt ...[2024-06-10 11:29:29.470116] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:57.521 passed 00:05:57.521 Test: lock_lba_range_then_submit_io ...[2024-06-10 11:29:29.504631] thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x562b1565b820 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:57.521 passed 00:05:57.779 Test: unregister_during_reset ...passed 00:05:57.779 Test: event_notify_and_close ...passed 00:05:57.779 Test: unregister_and_qos_poller ...passed 00:05:57.779 Suite: bdev_wrong_thread 00:05:57.779 Test: spdk_bdev_register_wt ...[2024-06-10 11:29:29.760858] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8459:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:05:57.779 passed 00:05:57.779 Test: spdk_bdev_examine_wt ...[2024-06-10 11:29:29.761182] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:05:57.779 passed 00:05:57.779 00:05:57.779 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.779 suites 2 2 n/a 0 0 00:05:57.779 tests 24 24 24 0 0 00:05:57.779 asserts 621 621 621 0 n/a 00:05:57.779 00:05:57.779 Elapsed time = 1.325 seconds 00:05:57.779 00:05:57.779 real 0m4.557s 00:05:57.779 user 0m2.087s 00:05:57.779 sys 0m2.464s 00:05:57.779 11:29:29 unittest.unittest_bdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.779 11:29:29 
unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:57.779 ************************************ 00:05:57.779 END TEST unittest_bdev 00:05:57.779 ************************************ 00:05:58.037 11:29:29 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:58.037 11:29:29 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:58.037 11:29:29 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:58.037 11:29:29 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:58.037 11:29:29 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:58.037 11:29:29 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:58.037 11:29:29 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:58.037 11:29:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:58.037 ************************************ 00:05:58.037 START TEST unittest_bdev_raid5f 00:05:58.037 ************************************ 00:05:58.037 11:29:29 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:58.037 00:05:58.037 00:05:58.037 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.037 http://cunit.sourceforge.net/ 00:05:58.037 00:05:58.037 00:05:58.037 Suite: raid5f 00:05:58.037 Test: test_raid5f_start ...passed 00:05:58.605 Test: test_raid5f_submit_read_request ...passed 00:05:58.864 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:05.456 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:31.993 Test: test_raid5f_chunk_write_error ...passed 00:06:46.867 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:51.053 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:37.719 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:37.719 00:07:37.719 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.719 suites 1 1 n/a 0 0 00:07:37.719 tests 8 8 8 0 0 00:07:37.719 asserts 518158 518158 518158 0 n/a 00:07:37.719 00:07:37.719 Elapsed time = 95.777 seconds 00:07:37.719 00:07:37.719 real 1m35.890s 00:07:37.719 user 1m30.609s 00:07:37.719 sys 0m5.268s 00:07:37.719 11:31:05 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:37.719 11:31:05 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:07:37.719 ************************************ 00:07:37.719 END TEST unittest_bdev_raid5f 00:07:37.719 ************************************ 00:07:37.719 11:31:05 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:07:37.719 11:31:05 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:37.719 11:31:05 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.719 11:31:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:37.719 ************************************ 00:07:37.719 START TEST unittest_blob_blobfs 00:07:37.719 ************************************ 00:07:37.719 11:31:05 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # unittest_blob 00:07:37.719 
11:31:05 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:37.719 11:31:05 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:37.719 00:07:37.719 00:07:37.719 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.719 http://cunit.sourceforge.net/ 00:07:37.719 00:07:37.719 00:07:37.719 Suite: blob_nocopy_noextent 00:07:37.719 Test: blob_init ...[2024-06-10 11:31:05.858886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:37.719 passed 00:07:37.719 Test: blob_thin_provision ...passed 00:07:37.719 Test: blob_read_only ...passed 00:07:37.719 Test: bs_load ...[2024-06-10 11:31:05.972834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:37.719 passed 00:07:37.719 Test: bs_load_custom_cluster_size ...passed 00:07:37.719 Test: bs_load_after_failed_grow ...passed 00:07:37.719 Test: bs_cluster_sz ...[2024-06-10 11:31:06.005898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:37.719 [2024-06-10 11:31:06.006403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:37.719 [2024-06-10 11:31:06.006632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:37.719 passed 00:07:37.719 Test: bs_resize_md ...passed 00:07:37.719 Test: bs_destroy ...passed 00:07:37.719 Test: bs_type ...passed 00:07:37.719 Test: bs_super_block ...passed 00:07:37.719 Test: bs_test_recover_cluster_count ...passed 00:07:37.719 Test: bs_grow_live ...passed 00:07:37.719 Test: bs_grow_live_no_space ...passed 00:07:37.719 Test: bs_test_grow ...passed 00:07:37.719 Test: blob_serialize_test ...passed 00:07:37.719 Test: super_block_crc ...passed 00:07:37.719 Test: blob_thin_prov_write_count_io ...passed 00:07:37.719 Test: blob_thin_prov_unmap_cluster ...passed 00:07:37.719 Test: bs_load_iter_test ...passed 00:07:37.719 Test: blob_relations ...[2024-06-10 11:31:06.217725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.719 [2024-06-10 11:31:06.217866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.719 [2024-06-10 11:31:06.218826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.719 [2024-06-10 11:31:06.218900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.719 passed 00:07:37.719 Test: blob_relations2 ...[2024-06-10 11:31:06.234145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.719 [2024-06-10 11:31:06.234245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.719 [2024-06-10 11:31:06.234293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with 
more than one clone 00:07:37.719 [2024-06-10 11:31:06.234321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.719 [2024-06-10 11:31:06.235734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.719 [2024-06-10 11:31:06.235799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.719 [2024-06-10 11:31:06.236223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.719 [2024-06-10 11:31:06.236274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.719 passed 00:07:37.719 Test: blob_relations3 ...passed 00:07:37.719 Test: blobstore_clean_power_failure ...passed 00:07:37.719 Test: blob_delete_snapshot_power_failure ...[2024-06-10 11:31:06.396720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:37.719 [2024-06-10 11:31:06.409432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:37.719 [2024-06-10 11:31:06.409568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:37.719 [2024-06-10 11:31:06.409637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.719 [2024-06-10 11:31:06.422917] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:37.720 [2024-06-10 11:31:06.423011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:37.720 [2024-06-10 11:31:06.423043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:37.720 [2024-06-10 11:31:06.423104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.720 [2024-06-10 11:31:06.436945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:37.720 [2024-06-10 11:31:06.437102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.720 [2024-06-10 11:31:06.450987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:37.720 [2024-06-10 11:31:06.451119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.720 [2024-06-10 11:31:06.464116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:37.720 [2024-06-10 11:31:06.464249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.720 passed 00:07:37.720 Test: blob_create_snapshot_power_failure ...[2024-06-10 11:31:06.502824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:37.720 [2024-06-10 11:31:06.527736] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:37.720 [2024-06-10 11:31:06.540436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:37.720 passed 00:07:37.720 Test: blob_io_unit ...passed 00:07:37.720 Test: blob_io_unit_compatibility ...passed 00:07:37.720 Test: blob_ext_md_pages ...passed 00:07:37.720 Test: blob_esnap_io_4096_4096 ...passed 00:07:37.720 Test: blob_esnap_io_512_512 ...passed 00:07:37.720 Test: blob_esnap_io_4096_512 ...passed 00:07:37.720 Test: blob_esnap_io_512_4096 ...passed 00:07:37.720 Test: blob_esnap_clone_resize ...passed 00:07:37.720 Suite: blob_bs_nocopy_noextent 00:07:37.720 Test: blob_open ...passed 00:07:37.720 Test: blob_create ...[2024-06-10 11:31:06.820587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:37.720 passed 00:07:37.720 Test: blob_create_loop ...passed 00:07:37.720 Test: blob_create_fail ...[2024-06-10 11:31:06.919980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:37.720 passed 00:07:37.720 Test: blob_create_internal ...passed 00:07:37.720 Test: blob_create_zero_extent ...passed 00:07:37.720 Test: blob_snapshot ...passed 00:07:37.720 Test: blob_clone ...passed 00:07:37.720 Test: blob_inflate ...[2024-06-10 11:31:07.109504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:37.720 passed 00:07:37.720 Test: blob_delete ...passed 00:07:37.720 Test: blob_resize_test ...[2024-06-10 11:31:07.175875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:37.720 passed 00:07:37.720 Test: blob_resize_thin_test ...passed 00:07:37.720 Test: channel_ops ...passed 00:07:37.720 Test: blob_super ...passed 00:07:37.720 Test: blob_rw_verify_iov ...passed 00:07:37.720 Test: blob_unmap ...passed 00:07:37.720 Test: blob_iter ...passed 00:07:37.720 Test: blob_parse_md ...passed 00:07:37.720 Test: bs_load_pending_removal ...passed 00:07:37.720 Test: bs_unload ...[2024-06-10 11:31:07.493245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:37.720 passed 00:07:37.720 Test: bs_usable_clusters ...passed 00:07:37.720 Test: blob_crc ...[2024-06-10 11:31:07.561561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:37.720 [2024-06-10 11:31:07.561690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:37.720 passed 00:07:37.720 Test: blob_flags ...passed 00:07:37.720 Test: bs_version ...passed 00:07:37.720 Test: blob_set_xattrs_test ...[2024-06-10 11:31:07.669978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:37.720 [2024-06-10 11:31:07.670120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:37.720 passed 00:07:37.720 Test: blob_thin_prov_alloc ...passed 
00:07:37.720 Test: blob_insert_cluster_msg_test ...passed 00:07:37.720 Test: blob_thin_prov_rw ...passed 00:07:37.720 Test: blob_thin_prov_rle ...passed 00:07:37.720 Test: blob_thin_prov_rw_iov ...passed 00:07:37.720 Test: blob_snapshot_rw ...passed 00:07:37.720 Test: blob_snapshot_rw_iov ...passed 00:07:37.720 Test: blob_inflate_rw ...passed 00:07:37.720 Test: blob_snapshot_freeze_io ...passed 00:07:37.720 Test: blob_operation_split_rw ...passed 00:07:37.720 Test: blob_operation_split_rw_iov ...passed 00:07:37.720 Test: blob_simultaneous_operations ...[2024-06-10 11:31:08.672258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:37.720 [2024-06-10 11:31:08.672362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.720 [2024-06-10 11:31:08.673602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:37.720 [2024-06-10 11:31:08.673682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.720 [2024-06-10 11:31:08.686524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:37.720 [2024-06-10 11:31:08.686600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.720 [2024-06-10 11:31:08.686744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:37.720 [2024-06-10 11:31:08.686778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.720 passed 00:07:37.720 Test: blob_persist_test ...passed 00:07:37.720 Test: blob_decouple_snapshot ...passed 00:07:37.720 Test: blob_seek_io_unit ...passed 00:07:37.720 Test: blob_nested_freezes ...passed 00:07:37.720 Test: blob_clone_resize ...passed 00:07:37.720 Test: blob_shallow_copy ...[2024-06-10 11:31:08.964915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:37.720 [2024-06-10 11:31:08.965292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:37.720 [2024-06-10 11:31:08.965587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:37.720 passed 00:07:37.720 Suite: blob_blob_nocopy_noextent 00:07:37.720 Test: blob_write ...passed 00:07:37.720 Test: blob_read ...passed 00:07:37.720 Test: blob_rw_verify ...passed 00:07:37.720 Test: blob_rw_verify_iov_nomem ...passed 00:07:37.720 Test: blob_rw_iov_read_only ...passed 00:07:37.720 Test: blob_xattr ...passed 00:07:37.720 Test: blob_dirty_shutdown ...passed 00:07:37.720 Test: blob_is_degraded ...passed 00:07:37.720 Suite: blob_esnap_bs_nocopy_noextent 00:07:37.720 Test: blob_esnap_create ...passed 00:07:37.720 Test: blob_esnap_thread_add_remove ...passed 00:07:37.720 Test: blob_esnap_clone_snapshot ...passed 00:07:37.720 Test: blob_esnap_clone_inflate ...passed 00:07:37.720 Test: blob_esnap_clone_decouple ...passed 00:07:37.720 Test: blob_esnap_clone_reload 
...passed 00:07:37.720 Test: blob_esnap_hotplug ...passed 00:07:37.720 Test: blob_set_parent ...[2024-06-10 11:31:09.518451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:37.720 [2024-06-10 11:31:09.518527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:37.720 [2024-06-10 11:31:09.518691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:37.720 [2024-06-10 11:31:09.518741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:37.720 [2024-06-10 11:31:09.519216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:37.720 passed 00:07:37.720 Test: blob_set_external_parent ...[2024-06-10 11:31:09.552726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:37.720 [2024-06-10 11:31:09.552803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:37.720 [2024-06-10 11:31:09.552833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:37.720 [2024-06-10 11:31:09.553228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:37.720 passed 00:07:37.720 Suite: blob_nocopy_extent 00:07:37.721 Test: blob_init ...[2024-06-10 11:31:09.564675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:37.721 passed 00:07:37.721 Test: blob_thin_provision ...passed 00:07:37.721 Test: blob_read_only ...passed 00:07:37.721 Test: bs_load ...[2024-06-10 11:31:09.610997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:37.721 passed 00:07:37.721 Test: bs_load_custom_cluster_size ...passed 00:07:37.721 Test: bs_load_after_failed_grow ...passed 00:07:37.721 Test: bs_cluster_sz ...[2024-06-10 11:31:09.636494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:37.721 [2024-06-10 11:31:09.636800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:37.721 [2024-06-10 11:31:09.636856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:37.721 passed 00:07:37.721 Test: bs_resize_md ...passed 00:07:37.721 Test: bs_destroy ...passed 00:07:37.721 Test: bs_type ...passed 00:07:37.721 Test: bs_super_block ...passed 00:07:37.721 Test: bs_test_recover_cluster_count ...passed 00:07:37.721 Test: bs_grow_live ...passed 00:07:37.721 Test: bs_grow_live_no_space ...passed 00:07:37.721 Test: bs_test_grow ...passed 00:07:37.721 Test: blob_serialize_test ...passed 00:07:37.721 Test: super_block_crc ...passed 00:07:37.721 Test: blob_thin_prov_write_count_io ...passed 00:07:37.980 Test: blob_thin_prov_unmap_cluster ...passed 00:07:37.980 Test: bs_load_iter_test ...passed 00:07:37.980 Test: blob_relations ...[2024-06-10 11:31:09.818466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.980 [2024-06-10 11:31:09.818589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.980 [2024-06-10 11:31:09.819514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.980 [2024-06-10 11:31:09.819568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.980 passed 00:07:37.980 Test: blob_relations2 ...[2024-06-10 11:31:09.833800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.980 [2024-06-10 11:31:09.833920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.980 [2024-06-10 11:31:09.833951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.980 [2024-06-10 11:31:09.833980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.980 [2024-06-10 11:31:09.835324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.980 [2024-06-10 11:31:09.835404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.980 [2024-06-10 11:31:09.835788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:37.980 [2024-06-10 11:31:09.835836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.980 passed 00:07:37.980 Test: blob_relations3 ...passed 00:07:37.980 Test: blobstore_clean_power_failure ...passed 00:07:37.980 Test: blob_delete_snapshot_power_failure ...[2024-06-10 11:31:09.991496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:37.980 [2024-06-10 11:31:10.003824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:37.980 [2024-06-10 11:31:10.016209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:37.980 [2024-06-10 11:31:10.016312] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:37.980 [2024-06-10 11:31:10.016350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.981 [2024-06-10 11:31:10.028723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:37.981 [2024-06-10 11:31:10.028833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:37.981 [2024-06-10 11:31:10.028862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:37.981 [2024-06-10 11:31:10.028901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:38.239 [2024-06-10 11:31:10.041424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:38.239 [2024-06-10 11:31:10.041518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:38.239 [2024-06-10 11:31:10.041545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:38.239 [2024-06-10 11:31:10.041584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:38.239 [2024-06-10 11:31:10.053998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:38.239 [2024-06-10 11:31:10.054122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:38.239 [2024-06-10 11:31:10.066570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:38.239 [2024-06-10 11:31:10.066704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:38.239 [2024-06-10 11:31:10.079301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:38.239 [2024-06-10 11:31:10.079403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:38.239 passed 00:07:38.239 Test: blob_create_snapshot_power_failure ...[2024-06-10 11:31:10.116494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:38.239 [2024-06-10 11:31:10.128624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:38.239 [2024-06-10 11:31:10.152615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:38.239 [2024-06-10 11:31:10.164904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:38.239 passed 00:07:38.239 Test: blob_io_unit ...passed 00:07:38.239 Test: blob_io_unit_compatibility ...passed 00:07:38.239 Test: blob_ext_md_pages ...passed 00:07:38.239 Test: blob_esnap_io_4096_4096 ...passed 00:07:38.498 Test: blob_esnap_io_512_512 ...passed 00:07:38.498 Test: blob_esnap_io_4096_512 ...passed 00:07:38.498 Test: 
blob_esnap_io_512_4096 ...passed 00:07:38.498 Test: blob_esnap_clone_resize ...passed 00:07:38.498 Suite: blob_bs_nocopy_extent 00:07:38.498 Test: blob_open ...passed 00:07:38.498 Test: blob_create ...[2024-06-10 11:31:10.436147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:38.498 passed 00:07:38.498 Test: blob_create_loop ...passed 00:07:38.498 Test: blob_create_fail ...[2024-06-10 11:31:10.542698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:38.498 passed 00:07:38.756 Test: blob_create_internal ...passed 00:07:38.756 Test: blob_create_zero_extent ...passed 00:07:38.756 Test: blob_snapshot ...passed 00:07:38.756 Test: blob_clone ...passed 00:07:38.756 Test: blob_inflate ...[2024-06-10 11:31:10.736628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:38.756 passed 00:07:38.756 Test: blob_delete ...passed 00:07:38.756 Test: blob_resize_test ...[2024-06-10 11:31:10.805819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:39.014 passed 00:07:39.014 Test: blob_resize_thin_test ...passed 00:07:39.014 Test: channel_ops ...passed 00:07:39.014 Test: blob_super ...passed 00:07:39.014 Test: blob_rw_verify_iov ...passed 00:07:39.014 Test: blob_unmap ...passed 00:07:39.014 Test: blob_iter ...passed 00:07:39.014 Test: blob_parse_md ...passed 00:07:39.273 Test: bs_load_pending_removal ...passed 00:07:39.273 Test: bs_unload ...[2024-06-10 11:31:11.125168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:39.273 passed 00:07:39.273 Test: bs_usable_clusters ...passed 00:07:39.273 Test: blob_crc ...[2024-06-10 11:31:11.194603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:39.273 [2024-06-10 11:31:11.194741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:39.273 passed 00:07:39.273 Test: blob_flags ...passed 00:07:39.273 Test: bs_version ...passed 00:07:39.273 Test: blob_set_xattrs_test ...[2024-06-10 11:31:11.300146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:39.273 [2024-06-10 11:31:11.300258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:39.273 passed 00:07:39.531 Test: blob_thin_prov_alloc ...passed 00:07:39.531 Test: blob_insert_cluster_msg_test ...passed 00:07:39.531 Test: blob_thin_prov_rw ...passed 00:07:39.531 Test: blob_thin_prov_rle ...passed 00:07:39.829 Test: blob_thin_prov_rw_iov ...passed 00:07:39.829 Test: blob_snapshot_rw ...passed 00:07:39.829 Test: blob_snapshot_rw_iov ...passed 00:07:40.090 Test: blob_inflate_rw ...passed 00:07:40.090 Test: blob_snapshot_freeze_io ...passed 00:07:40.090 Test: blob_operation_split_rw ...passed 00:07:40.349 Test: blob_operation_split_rw_iov ...passed 00:07:40.349 Test: blob_simultaneous_operations ...[2024-06-10 11:31:12.343029] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:40.349 [2024-06-10 11:31:12.343132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:40.349 [2024-06-10 11:31:12.344512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:40.349 [2024-06-10 11:31:12.344593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:40.349 [2024-06-10 11:31:12.358636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:40.349 [2024-06-10 11:31:12.358744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:40.349 [2024-06-10 11:31:12.358901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:40.349 [2024-06-10 11:31:12.358934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:40.349 passed 00:07:40.608 Test: blob_persist_test ...passed 00:07:40.608 Test: blob_decouple_snapshot ...passed 00:07:40.608 Test: blob_seek_io_unit ...passed 00:07:40.608 Test: blob_nested_freezes ...passed 00:07:40.608 Test: blob_clone_resize ...passed 00:07:40.608 Test: blob_shallow_copy ...[2024-06-10 11:31:12.664742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:40.608 [2024-06-10 11:31:12.665136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:40.608 [2024-06-10 11:31:12.665432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:40.866 passed 00:07:40.866 Suite: blob_blob_nocopy_extent 00:07:40.866 Test: blob_write ...passed 00:07:40.866 Test: blob_read ...passed 00:07:40.866 Test: blob_rw_verify ...passed 00:07:40.866 Test: blob_rw_verify_iov_nomem ...passed 00:07:40.866 Test: blob_rw_iov_read_only ...passed 00:07:40.866 Test: blob_xattr ...passed 00:07:41.125 Test: blob_dirty_shutdown ...passed 00:07:41.125 Test: blob_is_degraded ...passed 00:07:41.125 Suite: blob_esnap_bs_nocopy_extent 00:07:41.125 Test: blob_esnap_create ...passed 00:07:41.125 Test: blob_esnap_thread_add_remove ...passed 00:07:41.125 Test: blob_esnap_clone_snapshot ...passed 00:07:41.125 Test: blob_esnap_clone_inflate ...passed 00:07:41.125 Test: blob_esnap_clone_decouple ...passed 00:07:41.384 Test: blob_esnap_clone_reload ...passed 00:07:41.384 Test: blob_esnap_hotplug ...passed 00:07:41.385 Test: blob_set_parent ...[2024-06-10 11:31:13.251298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:41.385 [2024-06-10 11:31:13.251396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:41.385 [2024-06-10 11:31:13.251502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:41.385 
[2024-06-10 11:31:13.251536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:41.385 [2024-06-10 11:31:13.251929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:41.385 passed 00:07:41.385 Test: blob_set_external_parent ...[2024-06-10 11:31:13.285522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:41.385 [2024-06-10 11:31:13.285613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:41.385 [2024-06-10 11:31:13.285637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:41.385 [2024-06-10 11:31:13.285972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:41.385 passed 00:07:41.385 Suite: blob_copy_noextent 00:07:41.385 Test: blob_init ...[2024-06-10 11:31:13.297457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:41.385 passed 00:07:41.385 Test: blob_thin_provision ...passed 00:07:41.385 Test: blob_read_only ...passed 00:07:41.385 Test: bs_load ...[2024-06-10 11:31:13.343266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:41.385 passed 00:07:41.385 Test: bs_load_custom_cluster_size ...passed 00:07:41.385 Test: bs_load_after_failed_grow ...passed 00:07:41.385 Test: bs_cluster_sz ...[2024-06-10 11:31:13.367641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:41.385 [2024-06-10 11:31:13.367880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:41.385 [2024-06-10 11:31:13.367949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:41.385 passed 00:07:41.385 Test: bs_resize_md ...passed 00:07:41.385 Test: bs_destroy ...passed 00:07:41.385 Test: bs_type ...passed 00:07:41.645 Test: bs_super_block ...passed 00:07:41.645 Test: bs_test_recover_cluster_count ...passed 00:07:41.645 Test: bs_grow_live ...passed 00:07:41.645 Test: bs_grow_live_no_space ...passed 00:07:41.645 Test: bs_test_grow ...passed 00:07:41.645 Test: blob_serialize_test ...passed 00:07:41.645 Test: super_block_crc ...passed 00:07:41.645 Test: blob_thin_prov_write_count_io ...passed 00:07:41.645 Test: blob_thin_prov_unmap_cluster ...passed 00:07:41.645 Test: bs_load_iter_test ...passed 00:07:41.645 Test: blob_relations ...[2024-06-10 11:31:13.566845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:41.645 [2024-06-10 11:31:13.566976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.645 [2024-06-10 11:31:13.567738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:41.645 [2024-06-10 11:31:13.567816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.645 passed 00:07:41.645 Test: blob_relations2 ...[2024-06-10 11:31:13.585846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:41.645 [2024-06-10 11:31:13.585978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.645 [2024-06-10 11:31:13.586033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:41.645 [2024-06-10 11:31:13.586060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.645 [2024-06-10 11:31:13.587402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:41.645 [2024-06-10 11:31:13.587488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.645 [2024-06-10 11:31:13.587896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:41.645 [2024-06-10 11:31:13.587961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.645 passed 00:07:41.645 Test: blob_relations3 ...passed 00:07:41.905 Test: blobstore_clean_power_failure ...passed 00:07:41.905 Test: blob_delete_snapshot_power_failure ...[2024-06-10 11:31:13.750918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:41.905 [2024-06-10 11:31:13.762954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:41.905 [2024-06-10 11:31:13.763025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:41.905 [2024-06-10 11:31:13.763052] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.905 [2024-06-10 11:31:13.775029] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:41.905 [2024-06-10 11:31:13.775099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:41.905 [2024-06-10 11:31:13.775121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:41.905 [2024-06-10 11:31:13.775150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.905 [2024-06-10 11:31:13.787166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:41.905 [2024-06-10 11:31:13.787255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.905 [2024-06-10 11:31:13.799346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:41.905 [2024-06-10 11:31:13.799449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.905 [2024-06-10 11:31:13.811562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:41.905 [2024-06-10 11:31:13.811647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:41.905 passed 00:07:41.905 Test: blob_create_snapshot_power_failure ...[2024-06-10 11:31:13.847513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:41.905 [2024-06-10 11:31:13.871059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:41.905 [2024-06-10 11:31:13.883134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:41.905 passed 00:07:41.905 Test: blob_io_unit ...passed 00:07:41.905 Test: blob_io_unit_compatibility ...passed 00:07:41.905 Test: blob_ext_md_pages ...passed 00:07:42.165 Test: blob_esnap_io_4096_4096 ...passed 00:07:42.165 Test: blob_esnap_io_512_512 ...passed 00:07:42.165 Test: blob_esnap_io_4096_512 ...passed 00:07:42.165 Test: blob_esnap_io_512_4096 ...passed 00:07:42.165 Test: blob_esnap_clone_resize ...passed 00:07:42.165 Suite: blob_bs_copy_noextent 00:07:42.165 Test: blob_open ...passed 00:07:42.165 Test: blob_create ...[2024-06-10 11:31:14.153846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:42.165 passed 00:07:42.423 Test: blob_create_loop ...passed 00:07:42.423 Test: blob_create_fail ...[2024-06-10 11:31:14.245589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:42.423 passed 00:07:42.423 Test: blob_create_internal ...passed 00:07:42.423 Test: blob_create_zero_extent ...passed 00:07:42.423 Test: blob_snapshot ...passed 00:07:42.423 Test: blob_clone ...passed 00:07:42.423 Test: blob_inflate 
...[2024-06-10 11:31:14.417962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:42.423 passed 00:07:42.423 Test: blob_delete ...passed 00:07:42.683 Test: blob_resize_test ...[2024-06-10 11:31:14.484169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:42.683 passed 00:07:42.683 Test: blob_resize_thin_test ...passed 00:07:42.683 Test: channel_ops ...passed 00:07:42.683 Test: blob_super ...passed 00:07:42.683 Test: blob_rw_verify_iov ...passed 00:07:42.683 Test: blob_unmap ...passed 00:07:42.683 Test: blob_iter ...passed 00:07:42.683 Test: blob_parse_md ...passed 00:07:42.944 Test: bs_load_pending_removal ...passed 00:07:42.944 Test: bs_unload ...[2024-06-10 11:31:14.790063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:42.944 passed 00:07:42.944 Test: bs_usable_clusters ...passed 00:07:42.944 Test: blob_crc ...[2024-06-10 11:31:14.856995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:42.944 [2024-06-10 11:31:14.857102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:42.944 passed 00:07:42.944 Test: blob_flags ...passed 00:07:42.944 Test: bs_version ...passed 00:07:42.944 Test: blob_set_xattrs_test ...[2024-06-10 11:31:14.958613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:42.944 [2024-06-10 11:31:14.958745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:42.944 passed 00:07:43.203 Test: blob_thin_prov_alloc ...passed 00:07:43.203 Test: blob_insert_cluster_msg_test ...passed 00:07:43.203 Test: blob_thin_prov_rw ...passed 00:07:43.203 Test: blob_thin_prov_rle ...passed 00:07:43.203 Test: blob_thin_prov_rw_iov ...passed 00:07:43.463 Test: blob_snapshot_rw ...passed 00:07:43.463 Test: blob_snapshot_rw_iov ...passed 00:07:43.721 Test: blob_inflate_rw ...passed 00:07:43.721 Test: blob_snapshot_freeze_io ...passed 00:07:43.721 Test: blob_operation_split_rw ...passed 00:07:43.981 Test: blob_operation_split_rw_iov ...passed 00:07:43.981 Test: blob_simultaneous_operations ...[2024-06-10 11:31:15.911250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:43.981 [2024-06-10 11:31:15.911321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:43.981 [2024-06-10 11:31:15.911728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:43.981 [2024-06-10 11:31:15.911781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:43.981 [2024-06-10 11:31:15.914411] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:43.981 [2024-06-10 11:31:15.914464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:43.981 [2024-06-10 11:31:15.914554] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:43.981 [2024-06-10 11:31:15.914570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:43.981 passed 00:07:43.981 Test: blob_persist_test ...passed 00:07:43.981 Test: blob_decouple_snapshot ...passed 00:07:44.240 Test: blob_seek_io_unit ...passed 00:07:44.240 Test: blob_nested_freezes ...passed 00:07:44.240 Test: blob_clone_resize ...passed 00:07:44.240 Test: blob_shallow_copy ...[2024-06-10 11:31:16.154254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:44.240 [2024-06-10 11:31:16.154591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:44.240 [2024-06-10 11:31:16.154825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:44.240 passed 00:07:44.240 Suite: blob_blob_copy_noextent 00:07:44.240 Test: blob_write ...passed 00:07:44.240 Test: blob_read ...passed 00:07:44.240 Test: blob_rw_verify ...passed 00:07:44.498 Test: blob_rw_verify_iov_nomem ...passed 00:07:44.498 Test: blob_rw_iov_read_only ...passed 00:07:44.498 Test: blob_xattr ...passed 00:07:44.498 Test: blob_dirty_shutdown ...passed 00:07:44.498 Test: blob_is_degraded ...passed 00:07:44.498 Suite: blob_esnap_bs_copy_noextent 00:07:44.498 Test: blob_esnap_create ...passed 00:07:44.498 Test: blob_esnap_thread_add_remove ...passed 00:07:44.498 Test: blob_esnap_clone_snapshot ...passed 00:07:44.757 Test: blob_esnap_clone_inflate ...passed 00:07:44.757 Test: blob_esnap_clone_decouple ...passed 00:07:44.757 Test: blob_esnap_clone_reload ...passed 00:07:44.757 Test: blob_esnap_hotplug ...passed 00:07:44.757 Test: blob_set_parent ...[2024-06-10 11:31:16.680427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:44.757 [2024-06-10 11:31:16.680496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:44.757 [2024-06-10 11:31:16.680597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:44.757 [2024-06-10 11:31:16.680629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:44.757 [2024-06-10 11:31:16.680974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:44.757 passed 00:07:44.757 Test: blob_set_external_parent ...[2024-06-10 11:31:16.713320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:44.757 [2024-06-10 11:31:16.713429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:44.757 [2024-06-10 11:31:16.713450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:07:44.757 [2024-06-10 11:31:16.713709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:44.757 passed 00:07:44.757 Suite: blob_copy_extent 00:07:44.757 Test: blob_init ...[2024-06-10 11:31:16.724662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:44.757 passed 00:07:44.757 Test: blob_thin_provision ...passed 00:07:44.757 Test: blob_read_only ...passed 00:07:44.757 Test: bs_load ...[2024-06-10 11:31:16.768231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:44.757 passed 00:07:44.757 Test: bs_load_custom_cluster_size ...passed 00:07:44.757 Test: bs_load_after_failed_grow ...passed 00:07:44.757 Test: bs_cluster_sz ...[2024-06-10 11:31:16.791428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:44.757 [2024-06-10 11:31:16.791622] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:44.757 [2024-06-10 11:31:16.791659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:44.757 passed 00:07:44.757 Test: bs_resize_md ...passed 00:07:45.016 Test: bs_destroy ...passed 00:07:45.016 Test: bs_type ...passed 00:07:45.016 Test: bs_super_block ...passed 00:07:45.016 Test: bs_test_recover_cluster_count ...passed 00:07:45.016 Test: bs_grow_live ...passed 00:07:45.016 Test: bs_grow_live_no_space ...passed 00:07:45.016 Test: bs_test_grow ...passed 00:07:45.016 Test: blob_serialize_test ...passed 00:07:45.016 Test: super_block_crc ...passed 00:07:45.016 Test: blob_thin_prov_write_count_io ...passed 00:07:45.016 Test: blob_thin_prov_unmap_cluster ...passed 00:07:45.016 Test: bs_load_iter_test ...passed 00:07:45.016 Test: blob_relations ...[2024-06-10 11:31:16.962846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.016 [2024-06-10 11:31:16.962955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.016 [2024-06-10 11:31:16.963525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.016 [2024-06-10 11:31:16.963566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.016 passed 00:07:45.016 Test: blob_relations2 ...[2024-06-10 11:31:16.977052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.016 [2024-06-10 11:31:16.977126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.016 [2024-06-10 11:31:16.977156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.016 [2024-06-10 11:31:16.977174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.016 [2024-06-10 
11:31:16.978039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.016 [2024-06-10 11:31:16.978085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.016 [2024-06-10 11:31:16.978347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.016 [2024-06-10 11:31:16.978377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.016 passed 00:07:45.016 Test: blob_relations3 ...passed 00:07:45.275 Test: blobstore_clean_power_failure ...passed 00:07:45.275 Test: blob_delete_snapshot_power_failure ...[2024-06-10 11:31:17.130961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:45.275 [2024-06-10 11:31:17.142507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:45.275 [2024-06-10 11:31:17.154059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:45.275 [2024-06-10 11:31:17.154142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:45.275 [2024-06-10 11:31:17.154163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.275 [2024-06-10 11:31:17.165776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:45.275 [2024-06-10 11:31:17.165863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:45.275 [2024-06-10 11:31:17.165884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:45.275 [2024-06-10 11:31:17.165908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.275 [2024-06-10 11:31:17.177516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:45.275 [2024-06-10 11:31:17.180986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:45.275 [2024-06-10 11:31:17.181030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:45.275 [2024-06-10 11:31:17.181059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.275 [2024-06-10 11:31:17.192753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:45.275 [2024-06-10 11:31:17.192844] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.275 [2024-06-10 11:31:17.204771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:45.275 [2024-06-10 11:31:17.204894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.275 [2024-06-10 11:31:17.217122] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:45.275 [2024-06-10 11:31:17.217214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.275 passed 00:07:45.275 Test: blob_create_snapshot_power_failure ...[2024-06-10 11:31:17.253456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:45.275 [2024-06-10 11:31:17.265125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:45.275 [2024-06-10 11:31:17.288699] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:45.275 [2024-06-10 11:31:17.300873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:45.534 passed 00:07:45.534 Test: blob_io_unit ...passed 00:07:45.534 Test: blob_io_unit_compatibility ...passed 00:07:45.534 Test: blob_ext_md_pages ...passed 00:07:45.534 Test: blob_esnap_io_4096_4096 ...passed 00:07:45.534 Test: blob_esnap_io_512_512 ...passed 00:07:45.534 Test: blob_esnap_io_4096_512 ...passed 00:07:45.534 Test: blob_esnap_io_512_4096 ...passed 00:07:45.534 Test: blob_esnap_clone_resize ...passed 00:07:45.534 Suite: blob_bs_copy_extent 00:07:45.534 Test: blob_open ...passed 00:07:45.534 Test: blob_create ...[2024-06-10 11:31:17.566788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:45.534 passed 00:07:45.792 Test: blob_create_loop ...passed 00:07:45.792 Test: blob_create_fail ...[2024-06-10 11:31:17.666479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:45.792 passed 00:07:45.792 Test: blob_create_internal ...passed 00:07:45.792 Test: blob_create_zero_extent ...passed 00:07:45.792 Test: blob_snapshot ...passed 00:07:45.792 Test: blob_clone ...passed 00:07:45.792 Test: blob_inflate ...[2024-06-10 11:31:17.832804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:07:45.792 passed 00:07:46.051 Test: blob_delete ...passed 00:07:46.051 Test: blob_resize_test ...[2024-06-10 11:31:17.896433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:46.051 passed 00:07:46.051 Test: blob_resize_thin_test ...passed 00:07:46.051 Test: channel_ops ...passed 00:07:46.051 Test: blob_super ...passed 00:07:46.051 Test: blob_rw_verify_iov ...passed 00:07:46.051 Test: blob_unmap ...passed 00:07:46.051 Test: blob_iter ...passed 00:07:46.309 Test: blob_parse_md ...passed 00:07:46.309 Test: bs_load_pending_removal ...passed 00:07:46.309 Test: bs_unload ...[2024-06-10 11:31:18.190426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:46.309 passed 00:07:46.309 Test: bs_usable_clusters ...passed 00:07:46.309 Test: blob_crc ...[2024-06-10 11:31:18.253511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:46.309 [2024-06-10 11:31:18.253632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:46.309 passed 00:07:46.309 Test: blob_flags ...passed 00:07:46.309 Test: bs_version ...passed 00:07:46.309 Test: blob_set_xattrs_test ...[2024-06-10 11:31:18.350035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:46.309 [2024-06-10 11:31:18.350140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:46.309 passed 00:07:46.567 Test: blob_thin_prov_alloc ...passed 00:07:46.567 Test: blob_insert_cluster_msg_test ...passed 00:07:46.567 Test: blob_thin_prov_rw ...passed 00:07:46.567 Test: blob_thin_prov_rle ...passed 00:07:46.567 Test: blob_thin_prov_rw_iov ...passed 00:07:46.825 Test: blob_snapshot_rw ...passed 00:07:46.825 Test: blob_snapshot_rw_iov ...passed 00:07:47.083 Test: blob_inflate_rw ...passed 00:07:47.083 Test: blob_snapshot_freeze_io ...passed 00:07:47.083 Test: blob_operation_split_rw ...passed 00:07:47.342 Test: blob_operation_split_rw_iov ...passed 00:07:47.342 Test: blob_simultaneous_operations ...[2024-06-10 11:31:19.272261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.342 [2024-06-10 11:31:19.272377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.342 [2024-06-10 11:31:19.272820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.342 [2024-06-10 11:31:19.272873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.342 [2024-06-10 11:31:19.275350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.342 [2024-06-10 11:31:19.275406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.342 [2024-06-10 11:31:19.275492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.342 [2024-06-10 11:31:19.275509] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.342 passed 00:07:47.342 Test: blob_persist_test ...passed 00:07:47.342 Test: blob_decouple_snapshot ...passed 00:07:47.604 Test: blob_seek_io_unit ...passed 00:07:47.604 Test: blob_nested_freezes ...passed 00:07:47.604 Test: blob_clone_resize ...passed 00:07:47.604 Test: blob_shallow_copy ...[2024-06-10 11:31:19.513024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:47.604 [2024-06-10 11:31:19.513376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:47.604 [2024-06-10 11:31:19.513631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:47.604 passed 00:07:47.604 Suite: blob_blob_copy_extent 00:07:47.604 Test: blob_write ...passed 00:07:47.604 Test: blob_read ...passed 00:07:47.604 Test: blob_rw_verify ...passed 00:07:47.864 Test: blob_rw_verify_iov_nomem ...passed 00:07:47.864 Test: blob_rw_iov_read_only ...passed 00:07:47.864 Test: blob_xattr ...passed 00:07:47.864 Test: blob_dirty_shutdown ...passed 00:07:47.864 Test: blob_is_degraded ...passed 00:07:47.864 Suite: blob_esnap_bs_copy_extent 00:07:47.864 Test: blob_esnap_create ...passed 00:07:47.864 Test: blob_esnap_thread_add_remove ...passed 00:07:47.864 Test: blob_esnap_clone_snapshot ...passed 00:07:48.124 Test: blob_esnap_clone_inflate ...passed 00:07:48.124 Test: blob_esnap_clone_decouple ...passed 00:07:48.124 Test: blob_esnap_clone_reload ...passed 00:07:48.124 Test: blob_esnap_hotplug ...passed 00:07:48.124 Test: blob_set_parent ...[2024-06-10 11:31:20.051587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:48.124 [2024-06-10 11:31:20.051684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:48.124 [2024-06-10 11:31:20.051794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:48.124 [2024-06-10 11:31:20.051833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:48.124 [2024-06-10 11:31:20.052288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:48.124 passed 00:07:48.124 Test: blob_set_external_parent ...[2024-06-10 11:31:20.084786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:48.124 [2024-06-10 11:31:20.084906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:48.124 [2024-06-10 11:31:20.084934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:48.124 [2024-06-10 11:31:20.085302] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:48.124 passed 00:07:48.124 00:07:48.124 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.124 suites 16 16 n/a 0 0 00:07:48.124 tests 376 376 376 0 0 00:07:48.124 asserts 143965 143965 143965 0 n/a 00:07:48.124 00:07:48.124 Elapsed time = 14.232 seconds 00:07:48.383 11:31:20 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:48.383 00:07:48.383 00:07:48.383 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.383 http://cunit.sourceforge.net/ 00:07:48.383 00:07:48.383 00:07:48.383 Suite: blob_bdev 00:07:48.383 Test: create_bs_dev ...passed 00:07:48.383 Test: create_bs_dev_ro ...[2024-06-10 11:31:20.208139] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:48.383 passed 00:07:48.383 Test: create_bs_dev_rw ...passed 00:07:48.383 Test: claim_bs_dev ...passed 00:07:48.383 Test: claim_bs_dev_ro ...[2024-06-10 11:31:20.208557] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:48.383 passed 00:07:48.383 Test: deferred_destroy_refs ...passed 00:07:48.383 Test: deferred_destroy_channels ...passed 00:07:48.383 Test: deferred_destroy_threads ...passed 00:07:48.383 00:07:48.383 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.383 suites 1 1 n/a 0 0 00:07:48.383 tests 8 8 8 0 0 00:07:48.383 asserts 119 119 119 0 n/a 00:07:48.383 00:07:48.383 Elapsed time = 0.001 seconds 00:07:48.383 11:31:20 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:48.383 00:07:48.383 00:07:48.383 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.383 http://cunit.sourceforge.net/ 00:07:48.383 00:07:48.383 00:07:48.383 Suite: tree 00:07:48.383 Test: blobfs_tree_op_test ...passed 00:07:48.383 00:07:48.383 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.383 suites 1 1 n/a 0 0 00:07:48.383 tests 1 1 1 0 0 00:07:48.383 asserts 27 27 27 0 n/a 00:07:48.383 00:07:48.383 Elapsed time = 0.000 seconds 00:07:48.383 11:31:20 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:48.383 00:07:48.383 00:07:48.383 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.383 http://cunit.sourceforge.net/ 00:07:48.383 00:07:48.383 00:07:48.383 Suite: blobfs_async_ut 00:07:48.383 Test: fs_init ...passed 00:07:48.383 Test: fs_open ...passed 00:07:48.383 Test: fs_create ...passed 00:07:48.383 Test: fs_truncate ...passed 00:07:48.383 Test: fs_rename ...passed 00:07:48.383 Test: fs_rw_async ...[2024-06-10 11:31:20.400911] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:48.383 passed 00:07:48.383 Test: fs_writev_readv_async ...passed 00:07:48.383 Test: tree_find_buffer_ut ...passed 00:07:48.383 Test: channel_ops ...passed 00:07:48.641 Test: channel_ops_sync ...passed 00:07:48.641 00:07:48.641 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.641 suites 1 1 n/a 0 0 00:07:48.641 tests 10 10 10 0 0 00:07:48.641 asserts 292 292 292 0 n/a 00:07:48.641 00:07:48.641 Elapsed time = 0.166 seconds 00:07:48.641 11:31:20 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:48.641 00:07:48.641 00:07:48.641 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.641 http://cunit.sourceforge.net/ 00:07:48.641 00:07:48.641 00:07:48.641 Suite: blobfs_sync_ut 00:07:48.641 Test: cache_read_after_write ...passed 00:07:48.641 Test: file_length ...[2024-06-10 11:31:20.601218] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:48.641 passed 00:07:48.641 Test: append_write_to_extend_blob ...passed 00:07:48.641 Test: partial_buffer ...passed 00:07:48.641 Test: cache_write_null_buffer ...passed 00:07:48.641 Test: fs_create_sync ...passed 00:07:48.641 Test: fs_rename_sync ...passed 00:07:48.901 Test: cache_append_no_cache ...passed 00:07:48.901 Test: fs_delete_file_without_close ...passed 00:07:48.901 00:07:48.901 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.901 suites 1 1 n/a 0 0 00:07:48.901 tests 9 9 9 0 0 00:07:48.901 asserts 345 345 345 0 n/a 00:07:48.901 00:07:48.901 Elapsed time = 0.366 seconds 00:07:48.901 11:31:20 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:48.901 00:07:48.901 00:07:48.901 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.901 http://cunit.sourceforge.net/ 00:07:48.901 00:07:48.901 00:07:48.901 Suite: blobfs_bdev_ut 00:07:48.901 Test: spdk_blobfs_bdev_detect_test ...[2024-06-10 11:31:20.795319] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:48.901 passed 00:07:48.901 Test: spdk_blobfs_bdev_create_test ...passed 00:07:48.901 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:48.901 00:07:48.901 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.901 suites 1 1 n/a 0 0 00:07:48.901 tests 3 3 3 0 0 00:07:48.901 asserts 9 9 9 0 n/a 00:07:48.901 00:07:48.901 Elapsed time = 0.001 seconds 00:07:48.901 [2024-06-10 11:31:20.795758] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:48.901 00:07:48.901 real 0m14.994s 00:07:48.901 user 0m14.334s 00:07:48.901 sys 0m0.863s 00:07:48.901 11:31:20 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:48.901 11:31:20 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:07:48.901 ************************************ 00:07:48.901 END TEST unittest_blob_blobfs 00:07:48.901 ************************************ 00:07:48.901 11:31:20 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:07:48.901 11:31:20 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:48.901 11:31:20 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:48.901 11:31:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:48.901 ************************************ 00:07:48.901 START TEST unittest_event 00:07:48.901 ************************************ 00:07:48.901 11:31:20 unittest.unittest_event -- common/autotest_common.sh@1124 -- # unittest_event 00:07:48.901 11:31:20 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:48.901 00:07:48.901 00:07:48.901 CUnit - A unit testing framework for C - Version 2.1-3 
00:07:48.901 http://cunit.sourceforge.net/ 00:07:48.901 00:07:48.901 00:07:48.901 Suite: app_suite 00:07:48.901 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:48.901 00:07:48.901 CPU options: 00:07:48.901 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:48.901 (like [0,1,10]) 00:07:48.901 --lcores lcore to CPU mapping list. The list is in the format: 00:07:48.901 [<,lcores[@CPUs]>...] 00:07:48.901 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:48.901 Within the group, '-' is used for range separator, 00:07:48.901 ',' is used for single number separator. 00:07:48.901 '( )' can be omitted for single element group, 00:07:48.901 '@' can be omitted if cpus and lcores have the same value 00:07:48.901 --disable-cpumask-locks Disable CPU core lock files. 00:07:48.901 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:48.901 pollers in the app support interrupt mode) 00:07:48.901 -p, --main-core main (primary) core for DPDK 00:07:48.901 00:07:48.901 Configuration options: 00:07:48.901 -c, --config, --json JSON config file 00:07:48.901 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:48.901 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:48.901 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:48.901 --rpcs-allowed comma-separated list of permitted RPCS 00:07:48.901 --json-ignore-init-errors don't exit on invalid config entry 00:07:48.901 00:07:48.901 Memory options: 00:07:48.901 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:48.901 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:48.901 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:48.901 -R, --huge-unlink unlink huge files after initialization 00:07:48.901 -n, --mem-channels number of memory channels used for DPDK 00:07:48.901 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:48.902 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:48.902 --no-huge run without using hugepages 00:07:48.902 -i, --shm-id shared memory ID (optional) 00:07:48.902 -g, --single-file-segments force creating just one hugetlbfs file 00:07:48.902 00:07:48.902 PCI options: 00:07:48.902 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:48.902 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:48.902 -u, --no-pci disable PCI access 00:07:48.902 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:48.902 00:07:48.902 Log options: 00:07:48.902 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:48.902 --silence-noticelog disable notice level logging to stderr 00:07:48.902 00:07:48.902 Trace options: 00:07:48.902 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:48.902 setting 0 to disable trace (default 32768) 00:07:48.902 Tracepoints vary in size and can use more than one trace entry. 00:07:48.902 -e, --tpoint-group [:] 00:07:48.902 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:48.902 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:48.902 a tracepoint group. First tpoint inside a group can be enabled by 00:07:48.902 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:48.902 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:48.902 in /include/spdk_internal/trace_defs.h 00:07:48.902 00:07:48.902 Other options: 00:07:48.902 -h, --help show this usage 00:07:48.902 -v, --version print SPDK version 00:07:48.902 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:48.902 --env-context Opaque context for use of the env implementation 00:07:48.902 app_ut [options] 00:07:48.902 00:07:48.902 CPU options: 00:07:48.902 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDKapp_ut: invalid option -- 'z' 00:07:48.902 app_ut: unrecognized option '--test-long-opt' 00:07:48.902 00:07:48.902 (like [0,1,10]) 00:07:48.902 --lcores lcore to CPU mapping list. The list is in the format: 00:07:48.902 [<,lcores[@CPUs]>...] 00:07:48.902 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:48.902 Within the group, '-' is used for range separator, 00:07:48.902 ',' is used for single number separator. 00:07:48.902 '( )' can be omitted for single element group, 00:07:48.902 '@' can be omitted if cpus and lcores have the same value 00:07:48.902 --disable-cpumask-locks Disable CPU core lock files. 00:07:48.902 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:48.902 pollers in the app support interrupt mode) 00:07:48.902 -p, --main-core main (primary) core for DPDK 00:07:48.902 00:07:48.902 Configuration options: 00:07:48.902 -c, --config, --json JSON config file 00:07:48.902 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:48.902 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:48.902 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:48.902 --rpcs-allowed comma-separated list of permitted RPCS 00:07:48.902 --json-ignore-init-errors don't exit on invalid config entry 00:07:48.902 00:07:48.902 Memory options: 00:07:48.902 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:48.902 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:48.902 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:48.902 -R, --huge-unlink unlink huge files after initialization 00:07:48.902 -n, --mem-channels number of memory channels used for DPDK 00:07:48.902 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:48.902 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:48.902 --no-huge run without using hugepages 00:07:48.902 -i, --shm-id shared memory ID (optional) 00:07:48.902 -g, --single-file-segments force creating just one hugetlbfs file 00:07:48.902 00:07:48.902 PCI options: 00:07:48.902 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:48.902 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:48.902 -u, --no-pci disable PCI access 00:07:48.902 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:48.902 00:07:48.902 Log options: 00:07:48.902 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:48.902 --silence-noticelog disable notice level logging to stderr 00:07:48.902 00:07:48.902 Trace options: 00:07:48.902 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:48.902 setting 0 to disable trace (default 32768) 00:07:48.902 Tracepoints vary in size and can use more than one trace entry. 
00:07:48.902 -e, --tpoint-group [:] 00:07:48.902 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:48.902 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:48.902 a tracepoint group. First tpoint inside a group can be enabled by 00:07:48.902 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:48.902 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:48.902 in /include/spdk_internal/trace_defs.h 00:07:48.902 00:07:48.902 Other options: 00:07:48.902 -h, --help show this usage 00:07:48.902 -v, --version print SPDK version 00:07:48.902 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:48.902 --env-context Opaque context for use of the env implementation 00:07:48.902 [2024-06-10 11:31:20.897352] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:07:48.902 [2024-06-10 11:31:20.897724] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:48.902 app_ut [options] 00:07:48.902 00:07:48.902 CPU options: 00:07:48.902 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:48.902 (like [0,1,10]) 00:07:48.902 --lcores lcore to CPU mapping list. The list is in the format: 00:07:48.902 [<,lcores[@CPUs]>...] 00:07:48.902 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:48.902 Within the group, '-' is used for range separator, 00:07:48.902 ',' is used for single number separator. 00:07:48.902 '( )' can be omitted for single element group, 00:07:48.902 '@' can be omitted if cpus and lcores have the same value 00:07:48.902 --disable-cpumask-locks Disable CPU core lock files. 00:07:48.902 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:48.902 pollers in the app support interrupt mode) 00:07:48.902 -p, --main-core main (primary) core for DPDK 00:07:48.902 00:07:48.902 Configuration options: 00:07:48.902 -c, --config, --json JSON config file 00:07:48.902 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:48.902 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:48.902 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:48.902 --rpcs-allowed comma-separated list of permitted RPCS 00:07:48.902 --json-ignore-init-errors don't exit on invalid config entry 00:07:48.902 00:07:48.902 Memory options: 00:07:48.902 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:48.902 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:48.902 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:48.902 -R, --huge-unlink unlink huge files after initialization 00:07:48.902 -n, --mem-channels number of memory channels used for DPDK 00:07:48.902 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:48.902 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:48.902 --no-huge run without using hugepages 00:07:48.902 -i, --shm-id shared memory ID (optional) 00:07:48.902 -g, --single-file-segments force creating just one hugetlbfs file 00:07:48.902 00:07:48.902 PCI options: 00:07:48.902 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:48.902 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:48.902 -u, --no-pci disable PCI access 00:07:48.902 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:48.902 00:07:48.902 Log options: 00:07:48.902 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:48.902 --silence-noticelog disable notice level logging to stderr 00:07:48.902 00:07:48.902 Trace options: 00:07:48.902 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:48.902 setting 0 to disable trace (default 32768) 00:07:48.902 Tracepoints vary in size and can use more than one trace entry. 00:07:48.902 -e, --tpoint-group [:] 00:07:48.902 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:48.902 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:48.902 a tracepoint group. First tpoint inside a group can be enabled by 00:07:48.902 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:48.902 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:48.902 in /include/spdk_internal/trace_defs.h 00:07:48.902 00:07:48.902 Other options: 00:07:48.902 -h, --help show this usage 00:07:48.902 -v, --version print SPDK version 00:07:48.902 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:48.903 --env-context Opaque context for use of the env implementation 00:07:48.903 passed 00:07:48.903 00:07:48.903 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.903 suites 1 1 n/a 0 0 00:07:48.903 tests 1 1 1 0 0 00:07:48.903 asserts 8 8 8 0 n/a 00:07:48.903 00:07:48.903 Elapsed time = 0.002 seconds 00:07:48.903 [2024-06-10 11:31:20.898054] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:48.903 11:31:20 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:48.903 00:07:48.903 00:07:48.903 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.903 http://cunit.sourceforge.net/ 00:07:48.903 00:07:48.903 00:07:48.903 Suite: app_suite 00:07:48.903 Test: test_create_reactor ...passed 00:07:48.903 Test: test_init_reactors ...passed 00:07:48.903 Test: test_event_call ...passed 00:07:48.903 Test: test_schedule_thread ...passed 00:07:48.903 Test: test_reschedule_thread ...passed 00:07:48.903 Test: test_bind_thread ...passed 00:07:48.903 Test: test_for_each_reactor ...passed 00:07:48.903 Test: test_reactor_stats ...passed 00:07:49.162 Test: test_scheduler ...passed 00:07:49.162 Test: test_governor ...passed 00:07:49.162 00:07:49.162 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.162 suites 1 1 n/a 0 0 00:07:49.162 tests 10 10 10 0 0 00:07:49.162 asserts 344 344 344 0 n/a 00:07:49.162 00:07:49.162 Elapsed time = 0.021 seconds 00:07:49.162 00:07:49.162 real 0m0.113s 00:07:49.162 user 0m0.056s 00:07:49.162 sys 0m0.058s 00:07:49.162 11:31:20 unittest.unittest_event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:49.162 11:31:20 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:07:49.162 ************************************ 00:07:49.162 END TEST unittest_event 00:07:49.162 ************************************ 00:07:49.162 11:31:21 unittest -- unit/unittest.sh@235 -- # uname -s 00:07:49.162 11:31:21 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:07:49.162 11:31:21 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:07:49.162 11:31:21 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:49.162 11:31:21 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:49.162 11:31:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:49.162 ************************************ 00:07:49.162 START TEST unittest_ftl 00:07:49.162 ************************************ 00:07:49.162 11:31:21 unittest.unittest_ftl -- common/autotest_common.sh@1124 -- # unittest_ftl 00:07:49.162 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:49.162 00:07:49.162 00:07:49.162 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.162 http://cunit.sourceforge.net/ 00:07:49.162 00:07:49.162 00:07:49.162 Suite: ftl_band_suite 00:07:49.162 Test: test_band_block_offset_from_addr_base ...passed 00:07:49.162 Test: test_band_block_offset_from_addr_offset ...passed 00:07:49.162 Test: test_band_addr_from_block_offset ...passed 00:07:49.421 Test: test_band_set_addr 
...passed 00:07:49.421 Test: test_invalidate_addr ...passed 00:07:49.421 Test: test_next_xfer_addr ...passed 00:07:49.421 00:07:49.421 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.421 suites 1 1 n/a 0 0 00:07:49.421 tests 6 6 6 0 0 00:07:49.421 asserts 30356 30356 30356 0 n/a 00:07:49.421 00:07:49.421 Elapsed time = 0.223 seconds 00:07:49.421 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:49.421 00:07:49.421 00:07:49.421 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.421 http://cunit.sourceforge.net/ 00:07:49.421 00:07:49.421 00:07:49.421 Suite: ftl_bitmap 00:07:49.421 Test: test_ftl_bitmap_create ...[2024-06-10 11:31:21.414268] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:49.421 [2024-06-10 11:31:21.414624] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:49.421 passed 00:07:49.421 Test: test_ftl_bitmap_get ...passed 00:07:49.421 Test: test_ftl_bitmap_set ...passed 00:07:49.421 Test: test_ftl_bitmap_clear ...passed 00:07:49.421 Test: test_ftl_bitmap_find_first_set ...passed 00:07:49.421 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:49.421 Test: test_ftl_bitmap_count_set ...passed 00:07:49.421 00:07:49.421 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.421 suites 1 1 n/a 0 0 00:07:49.421 tests 7 7 7 0 0 00:07:49.421 asserts 137 137 137 0 n/a 00:07:49.421 00:07:49.421 Elapsed time = 0.001 seconds 00:07:49.421 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:49.421 00:07:49.421 00:07:49.421 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.421 http://cunit.sourceforge.net/ 00:07:49.421 00:07:49.421 00:07:49.421 Suite: ftl_io_suite 00:07:49.421 Test: test_completion ...passed 00:07:49.421 Test: test_multiple_ios ...passed 00:07:49.421 00:07:49.421 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.421 suites 1 1 n/a 0 0 00:07:49.421 tests 2 2 2 0 0 00:07:49.421 asserts 47 47 47 0 n/a 00:07:49.421 00:07:49.421 Elapsed time = 0.004 seconds 00:07:49.679 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:49.679 00:07:49.680 00:07:49.680 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.680 http://cunit.sourceforge.net/ 00:07:49.680 00:07:49.680 00:07:49.680 Suite: ftl_mngt 00:07:49.680 Test: test_next_step ...passed 00:07:49.680 Test: test_continue_step ...passed 00:07:49.680 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:49.680 Test: test_fail_step ...passed 00:07:49.680 Test: test_mngt_call_and_call_rollback ...passed 00:07:49.680 Test: test_nested_process_failure ...passed 00:07:49.680 Test: test_call_init_success ...passed 00:07:49.680 Test: test_call_init_failure ...passed 00:07:49.680 00:07:49.680 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.680 suites 1 1 n/a 0 0 00:07:49.680 tests 8 8 8 0 0 00:07:49.680 asserts 196 196 196 0 n/a 00:07:49.680 00:07:49.680 Elapsed time = 0.002 seconds 00:07:49.680 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:49.680 00:07:49.680 00:07:49.680 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.680 
http://cunit.sourceforge.net/ 00:07:49.680 00:07:49.680 00:07:49.680 Suite: ftl_mempool 00:07:49.680 Test: test_ftl_mempool_create ...passed 00:07:49.680 Test: test_ftl_mempool_get_put ...passed 00:07:49.680 00:07:49.680 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.680 suites 1 1 n/a 0 0 00:07:49.680 tests 2 2 2 0 0 00:07:49.680 asserts 36 36 36 0 n/a 00:07:49.680 00:07:49.680 Elapsed time = 0.000 seconds 00:07:49.680 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:49.680 00:07:49.680 00:07:49.680 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.680 http://cunit.sourceforge.net/ 00:07:49.680 00:07:49.680 00:07:49.680 Suite: ftl_addr64_suite 00:07:49.680 Test: test_addr_cached ...passed 00:07:49.680 00:07:49.680 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.680 suites 1 1 n/a 0 0 00:07:49.680 tests 1 1 1 0 0 00:07:49.680 asserts 1536 1536 1536 0 n/a 00:07:49.680 00:07:49.680 Elapsed time = 0.000 seconds 00:07:49.680 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:49.680 00:07:49.680 00:07:49.680 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.680 http://cunit.sourceforge.net/ 00:07:49.680 00:07:49.680 00:07:49.680 Suite: ftl_sb 00:07:49.680 Test: test_sb_crc_v2 ...passed 00:07:49.680 Test: test_sb_crc_v3 ...passed 00:07:49.680 Test: test_sb_v3_md_layout ...[2024-06-10 11:31:21.618201] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:49.680 [2024-06-10 11:31:21.618521] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:49.680 [2024-06-10 11:31:21.618576] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:49.680 [2024-06-10 11:31:21.618623] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:49.680 [2024-06-10 11:31:21.618677] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:49.680 [2024-06-10 11:31:21.618771] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:49.680 passed 00:07:49.680 Test: test_sb_v5_md_layout ...[2024-06-10 11:31:21.618809] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:49.680 [2024-06-10 11:31:21.618866] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:49.680 [2024-06-10 11:31:21.618967] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:49.680 [2024-06-10 11:31:21.619019] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:49.680 [2024-06-10 11:31:21.619067] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 
105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:49.680 passed 00:07:49.680 00:07:49.680 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.680 suites 1 1 n/a 0 0 00:07:49.680 tests 4 4 4 0 0 00:07:49.680 asserts 160 160 160 0 n/a 00:07:49.680 00:07:49.680 Elapsed time = 0.002 seconds 00:07:49.680 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:49.680 00:07:49.680 00:07:49.680 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.680 http://cunit.sourceforge.net/ 00:07:49.680 00:07:49.680 00:07:49.680 Suite: ftl_layout_upgrade 00:07:49.680 Test: test_l2p_upgrade ...passed 00:07:49.680 00:07:49.680 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.680 suites 1 1 n/a 0 0 00:07:49.680 tests 1 1 1 0 0 00:07:49.680 asserts 152 152 152 0 n/a 00:07:49.680 00:07:49.680 Elapsed time = 0.001 seconds 00:07:49.680 11:31:21 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:07:49.680 00:07:49.680 00:07:49.680 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.680 http://cunit.sourceforge.net/ 00:07:49.680 00:07:49.680 00:07:49.680 Suite: ftl_p2l_suite 00:07:49.680 Test: test_p2l_num_pages ...passed 00:07:50.246 Test: test_ckpt_issue ...passed 00:07:50.813 Test: test_persist_band_p2l ...passed 00:07:51.380 Test: test_clean_restore_p2l ...passed 00:07:52.753 Test: test_dirty_restore_p2l ...passed 00:07:52.753 00:07:52.753 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.753 suites 1 1 n/a 0 0 00:07:52.753 tests 5 5 5 0 0 00:07:52.753 asserts 10020 10020 10020 0 n/a 00:07:52.753 00:07:52.753 Elapsed time = 2.771 seconds 00:07:52.753 00:07:52.753 real 0m3.446s 00:07:52.753 user 0m1.275s 00:07:52.753 sys 0m2.173s 00:07:52.753 11:31:24 unittest.unittest_ftl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:52.753 11:31:24 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:07:52.753 ************************************ 00:07:52.753 END TEST unittest_ftl 00:07:52.753 ************************************ 00:07:52.753 11:31:24 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:52.753 11:31:24 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:52.754 11:31:24 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:52.754 11:31:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 ************************************ 00:07:52.754 START TEST unittest_accel 00:07:52.754 ************************************ 00:07:52.754 11:31:24 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:52.754 00:07:52.754 00:07:52.754 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.754 http://cunit.sourceforge.net/ 00:07:52.754 00:07:52.754 00:07:52.754 Suite: accel_sequence 00:07:52.754 Test: test_sequence_fill_copy ...passed 00:07:52.754 Test: test_sequence_abort ...passed 00:07:52.754 Test: test_sequence_append_error ...passed 00:07:52.754 Test: test_sequence_completion_error ...[2024-06-10 11:31:24.582126] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1931:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fa3a37547c0 00:07:52.754 [2024-06-10 11:31:24.582647] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1931:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fa3a37547c0 00:07:52.754 [2024-06-10 11:31:24.583244] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1841:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fa3a37547c0 00:07:52.754 [2024-06-10 11:31:24.583323] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1841:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fa3a37547c0 00:07:52.754 passed 00:07:52.754 Test: test_sequence_decompress ...passed 00:07:52.754 Test: test_sequence_reverse ...passed 00:07:52.754 Test: test_sequence_copy_elision ...passed 00:07:52.754 Test: test_sequence_accel_buffers ...passed 00:07:52.754 Test: test_sequence_memory_domain ...[2024-06-10 11:31:24.596714] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1733:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:52.754 [2024-06-10 11:31:24.596946] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1772:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:52.754 passed 00:07:52.754 Test: test_sequence_module_memory_domain ...passed 00:07:52.754 Test: test_sequence_crypto ...passed 00:07:52.754 Test: test_sequence_driver ...[2024-06-10 11:31:24.604545] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1880:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fa3a2a037c0 using driver: ut 00:07:52.754 [2024-06-10 11:31:24.604670] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1944:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fa3a2a037c0 through driver: ut 00:07:52.754 passed 00:07:52.754 Test: test_sequence_same_iovs ...passed 00:07:52.754 Test: test_sequence_crc32 ...passed 00:07:52.754 Suite: accel 00:07:52.754 Test: test_spdk_accel_task_complete ...passed 00:07:52.754 Test: test_get_task ...passed 00:07:52.754 Test: test_spdk_accel_submit_copy ...passed 00:07:52.754 Test: test_spdk_accel_submit_dualcast ...[2024-06-10 11:31:24.610305] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:52.754 [2024-06-10 11:31:24.610379] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:52.754 passed 00:07:52.754 Test: test_spdk_accel_submit_compare ...passed 00:07:52.754 Test: test_spdk_accel_submit_fill ...passed 00:07:52.754 Test: test_spdk_accel_submit_crc32c ...passed 00:07:52.754 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:52.754 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:52.754 Test: test_spdk_accel_submit_xor ...passed 00:07:52.754 Test: test_spdk_accel_module_find_by_name ...passed 00:07:52.754 Test: test_spdk_accel_module_register ...passed 00:07:52.754 00:07:52.754 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.754 suites 2 2 n/a 0 0 00:07:52.754 tests 26 26 26 0 0 00:07:52.754 asserts 830 830 830 0 n/a 00:07:52.754 00:07:52.754 Elapsed time = 0.041 seconds 00:07:52.754 00:07:52.754 real 0m0.090s 00:07:52.754 user 0m0.037s 00:07:52.754 sys 0m0.054s 00:07:52.754 11:31:24 unittest.unittest_accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:52.754 11:31:24 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 ************************************ 00:07:52.754 END TEST unittest_accel 00:07:52.754 
************************************ 00:07:52.754 11:31:24 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:52.754 11:31:24 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:52.754 11:31:24 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:52.754 11:31:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 ************************************ 00:07:52.754 START TEST unittest_ioat 00:07:52.754 ************************************ 00:07:52.754 11:31:24 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:52.754 00:07:52.754 00:07:52.754 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.754 http://cunit.sourceforge.net/ 00:07:52.754 00:07:52.754 00:07:52.754 Suite: ioat 00:07:52.754 Test: ioat_state_check ...passed 00:07:52.754 00:07:52.754 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.754 suites 1 1 n/a 0 0 00:07:52.754 tests 1 1 1 0 0 00:07:52.754 asserts 32 32 32 0 n/a 00:07:52.754 00:07:52.754 Elapsed time = 0.000 seconds 00:07:52.754 00:07:52.754 real 0m0.041s 00:07:52.754 user 0m0.013s 00:07:52.754 sys 0m0.028s 00:07:52.754 11:31:24 unittest.unittest_ioat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:52.754 11:31:24 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 ************************************ 00:07:52.754 END TEST unittest_ioat 00:07:52.754 ************************************ 00:07:52.754 11:31:24 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:52.754 11:31:24 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:52.754 11:31:24 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:52.754 11:31:24 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:52.754 11:31:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 ************************************ 00:07:52.754 START TEST unittest_idxd_user 00:07:52.754 ************************************ 00:07:52.754 11:31:24 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:53.013 00:07:53.013 00:07:53.013 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.013 http://cunit.sourceforge.net/ 00:07:53.013 00:07:53.013 00:07:53.013 Suite: idxd_user 00:07:53.013 Test: test_idxd_wait_cmd ...[2024-06-10 11:31:24.822003] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:53.013 passed 00:07:53.013 Test: test_idxd_reset_dev ...[2024-06-10 11:31:24.822402] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:53.013 [2024-06-10 11:31:24.822570] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:53.013 [2024-06-10 11:31:24.822629] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:53.013 passed 00:07:53.013 Test: test_idxd_group_config ...passed 00:07:53.013 Test: test_idxd_wq_config ...passed 00:07:53.013 00:07:53.013 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.013 
suites 1 1 n/a 0 0 00:07:53.013 tests 4 4 4 0 0 00:07:53.013 asserts 20 20 20 0 n/a 00:07:53.013 00:07:53.013 Elapsed time = 0.001 seconds 00:07:53.013 00:07:53.013 real 0m0.041s 00:07:53.013 user 0m0.029s 00:07:53.013 sys 0m0.012s 00:07:53.013 11:31:24 unittest.unittest_idxd_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.013 11:31:24 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:07:53.013 ************************************ 00:07:53.013 END TEST unittest_idxd_user 00:07:53.013 ************************************ 00:07:53.013 11:31:24 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:07:53.013 11:31:24 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:53.013 11:31:24 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.013 11:31:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:53.013 ************************************ 00:07:53.013 START TEST unittest_iscsi 00:07:53.013 ************************************ 00:07:53.013 11:31:24 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # unittest_iscsi 00:07:53.013 11:31:24 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:53.013 00:07:53.013 00:07:53.013 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.013 http://cunit.sourceforge.net/ 00:07:53.013 00:07:53.013 00:07:53.013 Suite: conn_suite 00:07:53.013 Test: read_task_split_in_order_case ...passed 00:07:53.013 Test: read_task_split_reverse_order_case ...passed 00:07:53.013 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:53.013 Test: process_non_read_task_completion_test ...passed 00:07:53.013 Test: free_tasks_on_connection ...passed 00:07:53.013 Test: free_tasks_with_queued_datain ...passed 00:07:53.013 Test: abort_queued_datain_task_test ...passed 00:07:53.013 Test: abort_queued_datain_tasks_test ...passed 00:07:53.013 00:07:53.013 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.013 suites 1 1 n/a 0 0 00:07:53.013 tests 8 8 8 0 0 00:07:53.013 asserts 230 230 230 0 n/a 00:07:53.013 00:07:53.013 Elapsed time = 0.000 seconds 00:07:53.013 11:31:24 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:53.013 00:07:53.013 00:07:53.013 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.013 http://cunit.sourceforge.net/ 00:07:53.013 00:07:53.013 00:07:53.013 Suite: iscsi_suite 00:07:53.013 Test: param_negotiation_test ...passed 00:07:53.013 Test: list_negotiation_test ...passed 00:07:53.013 Test: parse_valid_test ...passed 00:07:53.013 Test: parse_invalid_test ...[2024-06-10 11:31:24.983115] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:53.013 [2024-06-10 11:31:24.983480] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:53.013 [2024-06-10 11:31:24.983545] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:07:53.013 [2024-06-10 11:31:24.983629] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:53.013 [2024-06-10 11:31:24.983792] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:53.013 [2024-06-10 11:31:24.983862] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is 
bigger than 63 00:07:53.013 [2024-06-10 11:31:24.983999] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:53.013 passed 00:07:53.013 00:07:53.013 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.013 suites 1 1 n/a 0 0 00:07:53.013 tests 4 4 4 0 0 00:07:53.013 asserts 161 161 161 0 n/a 00:07:53.013 00:07:53.013 Elapsed time = 0.006 seconds 00:07:53.013 11:31:25 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:53.013 00:07:53.013 00:07:53.013 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.013 http://cunit.sourceforge.net/ 00:07:53.013 00:07:53.013 00:07:53.013 Suite: iscsi_target_node_suite 00:07:53.013 Test: add_lun_test_cases ...[2024-06-10 11:31:25.028233] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:53.013 [2024-06-10 11:31:25.028610] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:53.013 [2024-06-10 11:31:25.028717] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:53.013 [2024-06-10 11:31:25.028763] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:53.013 [2024-06-10 11:31:25.028808] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:53.013 passed 00:07:53.013 Test: allow_any_allowed ...passed 00:07:53.013 Test: allow_ipv6_allowed ...passed 00:07:53.013 Test: allow_ipv6_denied ...passed 00:07:53.013 Test: allow_ipv6_invalid ...passed 00:07:53.013 Test: allow_ipv4_allowed ...passed 00:07:53.013 Test: allow_ipv4_denied ...passed 00:07:53.013 Test: allow_ipv4_invalid ...passed 00:07:53.013 Test: node_access_allowed ...passed 00:07:53.013 Test: node_access_denied_by_empty_netmask ...passed 00:07:53.013 Test: node_access_multi_initiator_groups_cases ...passed 00:07:53.014 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:53.014 Test: chap_param_test_cases ...[2024-06-10 11:31:25.029280] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:53.014 [2024-06-10 11:31:25.029328] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:53.014 passed 00:07:53.014 00:07:53.014 [2024-06-10 11:31:25.029388] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:53.014 [2024-06-10 11:31:25.029427] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:53.014 [2024-06-10 11:31:25.029476] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:53.014 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.014 suites 1 1 n/a 0 0 00:07:53.014 tests 13 13 13 0 0 00:07:53.014 asserts 50 50 50 0 n/a 00:07:53.014 00:07:53.014 Elapsed time = 0.001 seconds 00:07:53.014 11:31:25 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:53.273 00:07:53.273 00:07:53.273 CUnit - A unit testing 
framework for C - Version 2.1-3 00:07:53.273 http://cunit.sourceforge.net/ 00:07:53.273 00:07:53.273 00:07:53.273 Suite: iscsi_suite 00:07:53.273 Test: op_login_check_target_test ...passed 00:07:53.273 Test: op_login_session_normal_test ...[2024-06-10 11:31:25.074142] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:53.273 [2024-06-10 11:31:25.074556] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:53.273 [2024-06-10 11:31:25.074611] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:53.273 [2024-06-10 11:31:25.074837] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:53.273 [2024-06-10 11:31:25.074911] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:53.273 [2024-06-10 11:31:25.075023] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:53.273 [2024-06-10 11:31:25.075136] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:53.273 [2024-06-10 11:31:25.075203] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:53.273 passed 00:07:53.273 Test: maxburstlength_test ...[2024-06-10 11:31:25.075469] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:53.273 [2024-06-10 11:31:25.075525] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4554:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:53.273 passed 00:07:53.273 Test: underflow_for_read_transfer_test ...passed 00:07:53.273 Test: underflow_for_zero_read_transfer_test ...passed 00:07:53.273 Test: underflow_for_request_sense_test ...passed 00:07:53.273 Test: underflow_for_check_condition_test ...passed 00:07:53.273 Test: add_transfer_task_test ...passed 00:07:53.273 Test: get_transfer_task_test ...passed 00:07:53.273 Test: del_transfer_task_test ...passed 00:07:53.273 Test: clear_all_transfer_tasks_test ...passed 00:07:53.273 Test: build_iovs_test ...passed 00:07:53.273 Test: build_iovs_with_md_test ...passed 00:07:53.273 Test: pdu_hdr_op_login_test ...[2024-06-10 11:31:25.077166] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:53.273 [2024-06-10 11:31:25.077309] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:53.273 passed 00:07:53.273 Test: pdu_hdr_op_text_test ...[2024-06-10 11:31:25.077401] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:53.273 [2024-06-10 11:31:25.077509] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2246:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:53.273 [2024-06-10 11:31:25.077613] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:53.273 passed 00:07:53.273 Test: pdu_hdr_op_logout_test ...[2024-06-10 11:31:25.077669] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2291:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:53.273 [2024-06-10 11:31:25.077774] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2521:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:07:53.273 passed 00:07:53.273 Test: pdu_hdr_op_scsi_test ...[2024-06-10 11:31:25.077950] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:53.273 [2024-06-10 11:31:25.077986] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:53.273 [2024-06-10 11:31:25.078051] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:53.273 [2024-06-10 11:31:25.078156] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3403:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:53.273 [2024-06-10 11:31:25.078274] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3410:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:53.273 [2024-06-10 11:31:25.078462] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:53.273 passed 00:07:53.273 Test: pdu_hdr_op_task_mgmt_test ...[2024-06-10 11:31:25.078580] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:53.273 [2024-06-10 11:31:25.078697] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:53.273 passed 00:07:53.273 Test: pdu_hdr_op_nopout_test ...[2024-06-10 11:31:25.078939] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:53.273 passed 00:07:53.273 Test: pdu_hdr_op_data_test ...[2024-06-10 11:31:25.079070] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:53.273 [2024-06-10 11:31:25.079113] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:53.273 [2024-06-10 11:31:25.079161] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:53.273 [2024-06-10 11:31:25.079232] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:53.273 [2024-06-10 11:31:25.079362] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:53.273 [2024-06-10 11:31:25.079465] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:53.273 [2024-06-10 11:31:25.079564] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:53.273 [2024-06-10 11:31:25.079672] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:53.273 [2024-06-10 11:31:25.079803] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:53.273 passed 00:07:53.273 Test: empty_text_with_cbit_test ...passed 00:07:53.273 Test: pdu_payload_read_test ...[2024-06-10 11:31:25.079873] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4249:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:53.273 [2024-06-10 11:31:25.082593] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4637:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:53.273 passed 00:07:53.273 Test: data_out_pdu_sequence_test ...passed 00:07:53.273 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:53.273 00:07:53.273 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.273 suites 1 1 n/a 0 0 00:07:53.273 tests 24 24 24 0 0 00:07:53.273 asserts 150253 150253 150253 0 n/a 00:07:53.273 00:07:53.273 Elapsed time = 0.019 seconds 00:07:53.273 11:31:25 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:53.273 00:07:53.273 00:07:53.273 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.273 http://cunit.sourceforge.net/ 00:07:53.273 00:07:53.273 00:07:53.273 Suite: init_grp_suite 00:07:53.273 Test: create_initiator_group_success_case ...passed 00:07:53.273 Test: find_initiator_group_success_case ...passed 00:07:53.273 Test: register_initiator_group_twice_case ...passed 00:07:53.273 Test: add_initiator_name_success_case ...passed 00:07:53.273 Test: add_initiator_name_fail_case ...passed 00:07:53.273 Test: delete_all_initiator_names_success_case ...passed 00:07:53.273 Test: add_netmask_success_case ...passed[2024-06-10 11:31:25.138966] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:53.273 00:07:53.273 Test: add_netmask_fail_case ...passed 00:07:53.273 Test: delete_all_netmasks_success_case ...passed 00:07:53.273 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:53.273 Test: netmask_overwrite_all_to_any_case ...passed 00:07:53.273 Test: add_delete_initiator_names_case ...passed 00:07:53.273 Test: add_duplicated_initiator_names_case ...[2024-06-10 11:31:25.139452] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:53.273 passed 00:07:53.273 Test: delete_nonexisting_initiator_names_case ...passed 00:07:53.273 Test: add_delete_netmasks_case ...passed 00:07:53.273 Test: add_duplicated_netmasks_case ...passed 00:07:53.273 Test: delete_nonexisting_netmasks_case ...passed 00:07:53.273 00:07:53.273 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.274 suites 1 1 n/a 0 0 00:07:53.274 tests 17 17 17 0 0 00:07:53.274 asserts 108 108 108 0 n/a 00:07:53.274 00:07:53.274 Elapsed time = 0.001 seconds 00:07:53.274 11:31:25 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:53.274 00:07:53.274 00:07:53.274 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.274 http://cunit.sourceforge.net/ 00:07:53.274 00:07:53.274 00:07:53.274 Suite: portal_grp_suite 00:07:53.274 Test: portal_create_ipv4_normal_case ...passed 00:07:53.274 Test: portal_create_ipv6_normal_case ...passed 00:07:53.274 Test: portal_create_ipv4_wildcard_case ...passed 00:07:53.274 Test: portal_create_ipv6_wildcard_case ...passed 00:07:53.274 Test: portal_create_twice_case ...passed 
00:07:53.274 Test: portal_grp_register_unregister_case ...passed 00:07:53.274 Test: portal_grp_register_twice_case ...passed 00:07:53.274 Test: portal_grp_add_delete_case ...[2024-06-10 11:31:25.181090] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:53.274 passed 00:07:53.274 Test: portal_grp_add_delete_twice_case ...passed 00:07:53.274 00:07:53.274 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.274 suites 1 1 n/a 0 0 00:07:53.274 tests 9 9 9 0 0 00:07:53.274 asserts 44 44 44 0 n/a 00:07:53.274 00:07:53.274 Elapsed time = 0.004 seconds 00:07:53.274 00:07:53.274 real 0m0.296s 00:07:53.274 user 0m0.145s 00:07:53.274 sys 0m0.154s 00:07:53.274 11:31:25 unittest.unittest_iscsi -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.274 11:31:25 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:07:53.274 ************************************ 00:07:53.274 END TEST unittest_iscsi 00:07:53.274 ************************************ 00:07:53.274 11:31:25 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:07:53.274 11:31:25 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:53.274 11:31:25 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.274 11:31:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:53.274 ************************************ 00:07:53.274 START TEST unittest_json 00:07:53.274 ************************************ 00:07:53.274 11:31:25 unittest.unittest_json -- common/autotest_common.sh@1124 -- # unittest_json 00:07:53.274 11:31:25 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:53.274 00:07:53.274 00:07:53.274 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.274 http://cunit.sourceforge.net/ 00:07:53.274 00:07:53.274 00:07:53.274 Suite: json 00:07:53.274 Test: test_parse_literal ...passed 00:07:53.274 Test: test_parse_string_simple ...passed 00:07:53.274 Test: test_parse_string_control_chars ...passed 00:07:53.274 Test: test_parse_string_utf8 ...passed 00:07:53.274 Test: test_parse_string_escapes_twochar ...passed 00:07:53.274 Test: test_parse_string_escapes_unicode ...passed 00:07:53.274 Test: test_parse_number ...passed 00:07:53.274 Test: test_parse_array ...passed 00:07:53.274 Test: test_parse_object ...passed 00:07:53.274 Test: test_parse_nesting ...passed 00:07:53.274 Test: test_parse_comment ...passed 00:07:53.274 00:07:53.274 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.274 suites 1 1 n/a 0 0 00:07:53.274 tests 11 11 11 0 0 00:07:53.274 asserts 1516 1516 1516 0 n/a 00:07:53.274 00:07:53.274 Elapsed time = 0.001 seconds 00:07:53.274 11:31:25 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:53.274 00:07:53.274 00:07:53.274 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.274 http://cunit.sourceforge.net/ 00:07:53.274 00:07:53.274 00:07:53.274 Suite: json 00:07:53.274 Test: test_strequal ...passed 00:07:53.274 Test: test_num_to_uint16 ...passed 00:07:53.274 Test: test_num_to_int32 ...passed 00:07:53.274 Test: test_num_to_uint64 ...passed 00:07:53.274 Test: test_decode_object ...passed 00:07:53.274 Test: test_decode_array ...passed 00:07:53.274 Test: test_decode_bool ...passed 00:07:53.274 Test: test_decode_uint16 ...passed 00:07:53.274 Test: test_decode_int32 ...passed 
00:07:53.274 Test: test_decode_uint32 ...passed 00:07:53.274 Test: test_decode_uint64 ...passed 00:07:53.274 Test: test_decode_string ...passed 00:07:53.274 Test: test_decode_uuid ...passed 00:07:53.274 Test: test_find ...passed 00:07:53.274 Test: test_find_array ...passed 00:07:53.274 Test: test_iterating ...passed 00:07:53.274 Test: test_free_object ...passed 00:07:53.274 00:07:53.274 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.274 suites 1 1 n/a 0 0 00:07:53.274 tests 17 17 17 0 0 00:07:53.274 asserts 236 236 236 0 n/a 00:07:53.274 00:07:53.274 Elapsed time = 0.001 seconds 00:07:53.533 11:31:25 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:53.533 00:07:53.533 00:07:53.533 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.533 http://cunit.sourceforge.net/ 00:07:53.533 00:07:53.533 00:07:53.533 Suite: json 00:07:53.533 Test: test_write_literal ...passed 00:07:53.533 Test: test_write_string_simple ...passed 00:07:53.533 Test: test_write_string_escapes ...passed 00:07:53.533 Test: test_write_string_utf16le ...passed 00:07:53.533 Test: test_write_number_int32 ...passed 00:07:53.533 Test: test_write_number_uint32 ...passed 00:07:53.533 Test: test_write_number_uint128 ...passed 00:07:53.533 Test: test_write_string_number_uint128 ...passed 00:07:53.533 Test: test_write_number_int64 ...passed 00:07:53.533 Test: test_write_number_uint64 ...passed 00:07:53.533 Test: test_write_number_double ...passed 00:07:53.533 Test: test_write_uuid ...passed 00:07:53.533 Test: test_write_array ...passed 00:07:53.533 Test: test_write_object ...passed 00:07:53.533 Test: test_write_nesting ...passed 00:07:53.533 Test: test_write_val ...passed 00:07:53.533 00:07:53.533 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.533 suites 1 1 n/a 0 0 00:07:53.533 tests 16 16 16 0 0 00:07:53.533 asserts 918 918 918 0 n/a 00:07:53.533 00:07:53.533 Elapsed time = 0.004 seconds 00:07:53.533 11:31:25 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:53.533 00:07:53.533 00:07:53.533 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.533 http://cunit.sourceforge.net/ 00:07:53.533 00:07:53.533 00:07:53.533 Suite: jsonrpc 00:07:53.533 Test: test_parse_request ...passed 00:07:53.533 Test: test_parse_request_streaming ...passed 00:07:53.533 00:07:53.533 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.533 suites 1 1 n/a 0 0 00:07:53.533 tests 2 2 2 0 0 00:07:53.533 asserts 289 289 289 0 n/a 00:07:53.533 00:07:53.533 Elapsed time = 0.005 seconds 00:07:53.533 00:07:53.533 real 0m0.164s 00:07:53.533 user 0m0.066s 00:07:53.533 sys 0m0.100s 00:07:53.533 ************************************ 00:07:53.533 END TEST unittest_json 00:07:53.533 ************************************ 00:07:53.533 11:31:25 unittest.unittest_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.533 11:31:25 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:07:53.533 11:31:25 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:07:53.533 11:31:25 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:53.533 11:31:25 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.533 11:31:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:53.533 ************************************ 00:07:53.533 START TEST unittest_rpc 00:07:53.533 
************************************ 00:07:53.533 11:31:25 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # unittest_rpc 00:07:53.533 11:31:25 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:53.533 00:07:53.533 00:07:53.533 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.533 http://cunit.sourceforge.net/ 00:07:53.533 00:07:53.533 00:07:53.533 Suite: rpc 00:07:53.533 Test: test_jsonrpc_handler ...passed 00:07:53.533 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:53.533 Test: test_rpc_get_methods ...passed 00:07:53.533 Test: test_rpc_spdk_get_version ...passed 00:07:53.533 Test: test_spdk_rpc_listen_close ...passed 00:07:53.533 Test: test_rpc_run_multiple_servers ...[2024-06-10 11:31:25.509887] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:53.533 passed 00:07:53.533 00:07:53.534 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.534 suites 1 1 n/a 0 0 00:07:53.534 tests 6 6 6 0 0 00:07:53.534 asserts 23 23 23 0 n/a 00:07:53.534 00:07:53.534 Elapsed time = 0.001 seconds 00:07:53.534 00:07:53.534 real 0m0.044s 00:07:53.534 user 0m0.020s 00:07:53.534 sys 0m0.023s 00:07:53.534 11:31:25 unittest.unittest_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.534 11:31:25 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.534 ************************************ 00:07:53.534 END TEST unittest_rpc 00:07:53.534 ************************************ 00:07:53.534 11:31:25 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:53.534 11:31:25 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:53.534 11:31:25 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.534 11:31:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:53.792 ************************************ 00:07:53.792 START TEST unittest_notify 00:07:53.792 ************************************ 00:07:53.792 11:31:25 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:53.792 00:07:53.792 00:07:53.792 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.792 http://cunit.sourceforge.net/ 00:07:53.792 00:07:53.792 00:07:53.792 Suite: app_suite 00:07:53.792 Test: notify ...passed 00:07:53.792 00:07:53.792 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.792 suites 1 1 n/a 0 0 00:07:53.792 tests 1 1 1 0 0 00:07:53.792 asserts 13 13 13 0 n/a 00:07:53.792 00:07:53.792 Elapsed time = 0.000 seconds 00:07:53.792 00:07:53.792 real 0m0.039s 00:07:53.792 user 0m0.028s 00:07:53.792 sys 0m0.011s 00:07:53.792 11:31:25 unittest.unittest_notify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.792 11:31:25 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:07:53.792 ************************************ 00:07:53.792 END TEST unittest_notify 00:07:53.792 ************************************ 00:07:53.792 11:31:25 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:07:53.792 11:31:25 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:53.792 11:31:25 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.792 11:31:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:53.792 ************************************ 00:07:53.792 
START TEST unittest_nvme 00:07:53.792 ************************************ 00:07:53.792 11:31:25 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # unittest_nvme 00:07:53.792 11:31:25 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:53.792 00:07:53.792 00:07:53.792 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.792 http://cunit.sourceforge.net/ 00:07:53.792 00:07:53.792 00:07:53.792 Suite: nvme 00:07:53.792 Test: test_opc_data_transfer ...passed 00:07:53.792 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:53.792 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:53.792 Test: test_trid_parse_and_compare ...[2024-06-10 11:31:25.711097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:53.792 [2024-06-10 11:31:25.711444] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:53.792 [2024-06-10 11:31:25.711549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1188:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:53.792 [2024-06-10 11:31:25.711599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:53.792 [2024-06-10 11:31:25.711640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:07:53.792 [2024-06-10 11:31:25.711743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:53.792 passed 00:07:53.792 Test: test_trid_trtype_str ...passed 00:07:53.792 Test: test_trid_adrfam_str ...passed 00:07:53.792 Test: test_nvme_ctrlr_probe ...passed 00:07:53.792 Test: test_spdk_nvme_probe ...[2024-06-10 11:31:25.712023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:53.792 [2024-06-10 11:31:25.712144] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:53.792 [2024-06-10 11:31:25.712190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:53.792 [2024-06-10 11:31:25.712324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:53.792 [2024-06-10 11:31:25.712401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:53.792 passed 00:07:53.792 Test: test_spdk_nvme_connect ...[2024-06-10 11:31:25.712508] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:53.792 passed 00:07:53.792 Test: test_nvme_ctrlr_probe_internal ...[2024-06-10 11:31:25.712912] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:53.793 [2024-06-10 11:31:25.712978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:53.793 passed 00:07:53.793 Test: test_nvme_init_controllers ...[2024-06-10 11:31:25.713107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:53.793 [2024-06-10 11:31:25.713156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 
00:07:53.793 [2024-06-10 11:31:25.713243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:53.793 passed 00:07:53.793 Test: test_nvme_driver_init ...[2024-06-10 11:31:25.713377] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:53.793 [2024-06-10 11:31:25.713425] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:53.793 [2024-06-10 11:31:25.822647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:53.793 passed 00:07:53.793 Test: test_spdk_nvme_detach ...passed 00:07:53.793 Test: test_nvme_completion_poll_cb ...passed 00:07:53.793 Test: test_nvme_user_copy_cmd_complete ...[2024-06-10 11:31:25.822985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:53.793 passed 00:07:53.793 Test: test_nvme_allocate_request_null ...passed 00:07:53.793 Test: test_nvme_allocate_request ...passed 00:07:53.793 Test: test_nvme_free_request ...passed 00:07:53.793 Test: test_nvme_allocate_request_user_copy ...passed 00:07:53.793 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:53.793 Test: test_nvme_request_check_timeout ...passed 00:07:53.793 Test: test_nvme_wait_for_completion ...passed 00:07:53.793 Test: test_spdk_nvme_parse_func ...passed 00:07:53.793 Test: test_spdk_nvme_detach_async ...passed 00:07:53.793 Test: test_nvme_parse_addr ...[2024-06-10 11:31:25.824561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:53.793 passed 00:07:53.793 00:07:53.793 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.793 suites 1 1 n/a 0 0 00:07:53.793 tests 25 25 25 0 0 00:07:53.793 asserts 326 326 326 0 n/a 00:07:53.793 00:07:53.793 Elapsed time = 0.007 seconds 00:07:54.106 11:31:25 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:54.106 00:07:54.106 00:07:54.106 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.106 http://cunit.sourceforge.net/ 00:07:54.106 00:07:54.106 00:07:54.106 Suite: nvme_ctrlr 00:07:54.106 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-06-10 11:31:25.874478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.106 passed 00:07:54.106 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-06-10 11:31:25.876670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.106 passed 00:07:54.106 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-06-10 11:31:25.877956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.106 passed 00:07:54.106 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-06-10 11:31:25.879209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.106 passed 00:07:54.106 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-06-10 11:31:25.880496] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.107 [2024-06-10 11:31:25.881685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-10 11:31:25.882950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-10 11:31:25.884155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:54.107 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-06-10 11:31:25.886513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.107 [2024-06-10 11:31:25.888759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-10 11:31:25.889938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:54.107 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-06-10 11:31:25.892296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.107 [2024-06-10 11:31:25.893576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-10 11:31:25.895958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:54.107 Test: test_nvme_ctrlr_init_delay ...[2024-06-10 11:31:25.898477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.107 passed 00:07:54.107 Test: test_alloc_io_qpair_rr_1 ...[2024-06-10 11:31:25.899847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.107 [2024-06-10 11:31:25.900076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:54.107 [2024-06-10 11:31:25.900298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:54.107 [2024-06-10 11:31:25.900420] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:54.107 [2024-06-10 11:31:25.900484] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:54.107 passed 00:07:54.107 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:54.107 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:54.107 Test: test_alloc_io_qpair_wrr_1 ...[2024-06-10 11:31:25.900643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.107 passed 00:07:54.107 Test: 
test_alloc_io_qpair_wrr_2 ...[2024-06-10 11:31:25.900880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.107 [2024-06-10 11:31:25.901040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:54.107 passed 00:07:54.107 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-06-10 11:31:25.901393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4858:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:54.107 [2024-06-10 11:31:25.901597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:54.107 [2024-06-10 11:31:25.901733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4935:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:07:54.107 passed 00:07:54.107 Test: test_nvme_ctrlr_fail ...[2024-06-10 11:31:25.901824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:54.107 [2024-06-10 11:31:25.901922] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:54.107 passed 00:07:54.107 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:54.107 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:54.107 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:54.107 Test: test_nvme_ctrlr_test_active_ns ...[2024-06-10 11:31:25.902268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.365 passed 00:07:54.365 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:54.365 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:54.365 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:54.366 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-06-10 11:31:26.244528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-06-10 11:31:26.251545] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-06-10 11:31:26.252751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 [2024-06-10 11:31:26.252822] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2883:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:54.366 passed 00:07:54.366 Test: test_alloc_io_qpair_fail ...[2024-06-10 11:31:26.253964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:54.366 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:54.366 Test: 
test_nvme_ctrlr_set_state ...[2024-06-10 11:31:26.254087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:54.366 [2024-06-10 11:31:26.254229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-06-10 11:31:26.254298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-06-10 11:31:26.277826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_ns_mgmt ...[2024-06-10 11:31:26.325288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_reset ...[2024-06-10 11:31:26.326950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_aer_callback ...[2024-06-10 11:31:26.327358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-06-10 11:31:26.328814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:54.366 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:54.366 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-06-10 11:31:26.330624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:54.366 Test: test_nvme_ctrlr_ana_resize ...[2024-06-10 11:31:26.331999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:54.366 Test: test_nvme_transport_ctrlr_ready ...[2024-06-10 11:31:26.333594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4029:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:54.366 passed 00:07:54.366 Test: test_nvme_ctrlr_disable ...[2024-06-10 11:31:26.333655] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4080:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:54.366 [2024-06-10 11:31:26.333712] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:54.366 
passed 00:07:54.366 00:07:54.366 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.366 suites 1 1 n/a 0 0 00:07:54.366 tests 43 43 43 0 0 00:07:54.366 asserts 10418 10418 10418 0 n/a 00:07:54.366 00:07:54.366 Elapsed time = 0.420 seconds 00:07:54.366 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:54.366 00:07:54.366 00:07:54.366 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.366 http://cunit.sourceforge.net/ 00:07:54.366 00:07:54.366 00:07:54.366 Suite: nvme_ctrlr_cmd 00:07:54.366 Test: test_get_log_pages ...passed 00:07:54.366 Test: test_set_feature_cmd ...passed 00:07:54.366 Test: test_set_feature_ns_cmd ...passed 00:07:54.366 Test: test_get_feature_cmd ...passed 00:07:54.366 Test: test_get_feature_ns_cmd ...passed 00:07:54.366 Test: test_abort_cmd ...passed 00:07:54.366 Test: test_set_host_id_cmds ...passed 00:07:54.366 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:54.366 Test: test_io_raw_cmd ...passed 00:07:54.366 Test: test_io_raw_cmd_with_md ...passed 00:07:54.366 Test: test_namespace_attach ...passed 00:07:54.366 Test: test_namespace_detach ...passed 00:07:54.366 Test: test_namespace_create ...passed 00:07:54.366 Test: test_namespace_delete ...passed 00:07:54.366 Test: test_doorbell_buffer_config ...passed 00:07:54.366 Test: test_format_nvme ...passed 00:07:54.366 Test: test_fw_commit ...passed 00:07:54.366 Test: test_fw_image_download ...passed 00:07:54.366 Test: test_sanitize ...passed 00:07:54.366 Test: test_directive ...passed 00:07:54.366 Test: test_nvme_request_add_abort ...passed 00:07:54.366 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:54.366 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:54.366 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:54.366 00:07:54.366 [2024-06-10 11:31:26.392841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:54.366 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.366 suites 1 1 n/a 0 0 00:07:54.366 tests 24 24 24 0 0 00:07:54.366 asserts 198 198 198 0 n/a 00:07:54.366 00:07:54.366 Elapsed time = 0.001 seconds 00:07:54.366 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:54.626 00:07:54.626 00:07:54.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.626 http://cunit.sourceforge.net/ 00:07:54.626 00:07:54.626 00:07:54.626 Suite: nvme_ctrlr_cmd 00:07:54.626 Test: test_geometry_cmd ...passed 00:07:54.626 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:54.626 00:07:54.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.626 suites 1 1 n/a 0 0 00:07:54.626 tests 2 2 2 0 0 00:07:54.626 asserts 7 7 7 0 n/a 00:07:54.626 00:07:54.626 Elapsed time = 0.000 seconds 00:07:54.626 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:54.626 00:07:54.626 00:07:54.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.626 http://cunit.sourceforge.net/ 00:07:54.626 00:07:54.626 00:07:54.626 Suite: nvme 00:07:54.626 Test: test_nvme_ns_construct ...passed 00:07:54.626 Test: test_nvme_ns_uuid ...passed 00:07:54.626 Test: test_nvme_ns_csi ...passed 00:07:54.626 Test: test_nvme_ns_data ...passed 00:07:54.626 Test: test_nvme_ns_set_identify_data ...passed 00:07:54.626 
Test: test_spdk_nvme_ns_get_values ...passed 00:07:54.626 Test: test_spdk_nvme_ns_is_active ...passed 00:07:54.626 Test: spdk_nvme_ns_supports ...passed 00:07:54.626 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:54.626 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:54.626 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:54.626 Test: test_nvme_ns_find_id_desc ...passed 00:07:54.626 00:07:54.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.626 suites 1 1 n/a 0 0 00:07:54.626 tests 12 12 12 0 0 00:07:54.626 asserts 83 83 83 0 n/a 00:07:54.626 00:07:54.626 Elapsed time = 0.000 seconds 00:07:54.626 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:54.626 00:07:54.626 00:07:54.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.626 http://cunit.sourceforge.net/ 00:07:54.626 00:07:54.626 00:07:54.626 Suite: nvme_ns_cmd 00:07:54.626 Test: split_test ...passed 00:07:54.626 Test: split_test2 ...passed 00:07:54.626 Test: split_test3 ...passed 00:07:54.626 Test: split_test4 ...passed 00:07:54.626 Test: test_nvme_ns_cmd_flush ...passed 00:07:54.626 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:54.626 Test: test_nvme_ns_cmd_copy ...passed 00:07:54.626 Test: test_io_flags ...[2024-06-10 11:31:26.501275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:54.626 passed 00:07:54.626 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:54.626 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:54.626 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:54.626 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:54.626 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:54.626 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:54.626 Test: test_cmd_child_request ...passed 00:07:54.626 Test: test_nvme_ns_cmd_readv ...passed 00:07:54.626 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:54.626 Test: test_nvme_ns_cmd_writev ...passed 00:07:54.626 Test: test_nvme_ns_cmd_write_with_md ...[2024-06-10 11:31:26.502565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:54.626 passed 00:07:54.626 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:54.626 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:54.626 Test: test_nvme_ns_cmd_comparev ...passed 00:07:54.626 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:54.626 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:54.626 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:54.626 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:54.626 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:54.626 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:07:54.626 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-06-10 11:31:26.504535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:54.626 [2024-06-10 11:31:26.504649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:54.626 passed 00:07:54.626 Test: test_nvme_ns_cmd_verify ...passed 00:07:54.626 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:54.626 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:54.626 00:07:54.626 Run Summary: Type Total Ran Passed Failed 
Inactive 00:07:54.626 suites 1 1 n/a 0 0 00:07:54.626 tests 32 32 32 0 0 00:07:54.626 asserts 550 550 550 0 n/a 00:07:54.626 00:07:54.626 Elapsed time = 0.005 seconds 00:07:54.626 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:54.626 00:07:54.626 00:07:54.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.626 http://cunit.sourceforge.net/ 00:07:54.626 00:07:54.626 00:07:54.626 Suite: nvme_ns_cmd 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:54.626 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:54.626 00:07:54.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.626 suites 1 1 n/a 0 0 00:07:54.626 tests 12 12 12 0 0 00:07:54.626 asserts 123 123 123 0 n/a 00:07:54.626 00:07:54.626 Elapsed time = 0.001 seconds 00:07:54.626 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:54.626 00:07:54.626 00:07:54.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.626 http://cunit.sourceforge.net/ 00:07:54.626 00:07:54.626 00:07:54.626 Suite: nvme_qpair 00:07:54.626 Test: test3 ...passed 00:07:54.626 Test: test_ctrlr_failed ...passed 00:07:54.626 Test: struct_packing ...passed 00:07:54.626 Test: test_nvme_qpair_process_completions ...[2024-06-10 11:31:26.585178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:54.626 passed 00:07:54.626 Test: test_nvme_completion_is_retry ...passed 00:07:54.626 Test: test_get_status_string ...passed 00:07:54.627 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-06-10 11:31:26.585635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:54.627 [2024-06-10 11:31:26.585747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:54.627 [2024-06-10 11:31:26.585862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:54.627 passed 00:07:54.627 Test: test_nvme_qpair_submit_request ...passed 00:07:54.627 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:54.627 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:54.627 Test: test_nvme_qpair_init_deinit ...[2024-06-10 11:31:26.586315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:54.627 passed 00:07:54.627 Test: 
test_nvme_get_sgl_print_info ...passed 00:07:54.627 00:07:54.627 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.627 suites 1 1 n/a 0 0 00:07:54.627 tests 12 12 12 0 0 00:07:54.627 asserts 154 154 154 0 n/a 00:07:54.627 00:07:54.627 Elapsed time = 0.002 seconds 00:07:54.627 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:54.627 00:07:54.627 00:07:54.627 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.627 http://cunit.sourceforge.net/ 00:07:54.627 00:07:54.627 00:07:54.627 Suite: nvme_pcie 00:07:54.627 Test: test_prp_list_append ...[2024-06-10 11:31:26.622888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:54.627 [2024-06-10 11:31:26.623293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:54.627 [2024-06-10 11:31:26.623358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:54.627 [2024-06-10 11:31:26.623649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:54.627 passed 00:07:54.627 Test: test_nvme_pcie_hotplug_monitor ...[2024-06-10 11:31:26.623761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:54.627 passed 00:07:54.627 Test: test_shadow_doorbell_update ...passed 00:07:54.627 Test: test_build_contig_hw_sgl_request ...passed 00:07:54.627 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:54.627 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:54.627 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:54.627 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-06-10 11:31:26.623979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:54.627 passed 00:07:54.627 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:54.627 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:54.627 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-06-10 11:31:26.624082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:07:54.627 passed 00:07:54.627 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:07:54.627 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-06-10 11:31:26.624168] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:54.627 [2024-06-10 11:31:26.624226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:54.627 passed 00:07:54.627 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:07:54.627 00:07:54.627 [2024-06-10 11:31:26.624281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:54.627 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.627 suites 1 1 n/a 0 0 00:07:54.627 tests 14 14 14 0 0 00:07:54.627 asserts 235 235 235 0 n/a 00:07:54.627 00:07:54.627 Elapsed time = 0.001 seconds 00:07:54.627 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:54.627 00:07:54.627 00:07:54.627 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.627 http://cunit.sourceforge.net/ 00:07:54.627 00:07:54.627 00:07:54.627 Suite: nvme_ns_cmd 00:07:54.627 Test: nvme_poll_group_create_test ...passed 00:07:54.627 Test: nvme_poll_group_add_remove_test ...passed 00:07:54.627 Test: nvme_poll_group_process_completions ...passed 00:07:54.627 Test: nvme_poll_group_destroy_test ...passed 00:07:54.627 Test: nvme_poll_group_get_free_stats ...passed 00:07:54.627 00:07:54.627 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.627 suites 1 1 n/a 0 0 00:07:54.627 tests 5 5 5 0 0 00:07:54.627 asserts 75 75 75 0 n/a 00:07:54.627 00:07:54.627 Elapsed time = 0.001 seconds 00:07:54.627 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:54.897 00:07:54.897 00:07:54.897 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.897 http://cunit.sourceforge.net/ 00:07:54.897 00:07:54.897 00:07:54.897 Suite: nvme_quirks 00:07:54.897 Test: test_nvme_quirks_striping ...passed 00:07:54.897 00:07:54.897 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.897 suites 1 1 n/a 0 0 00:07:54.897 tests 1 1 1 0 0 00:07:54.897 asserts 5 5 5 0 n/a 00:07:54.897 00:07:54.897 Elapsed time = 0.000 seconds 00:07:54.897 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:54.897 00:07:54.897 00:07:54.897 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.897 http://cunit.sourceforge.net/ 00:07:54.898 00:07:54.898 00:07:54.898 Suite: nvme_tcp 00:07:54.898 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:54.898 Test: test_nvme_tcp_build_iovs ...passed 00:07:54.898 Test: test_nvme_tcp_build_sgl_request ...[2024-06-10 11:31:26.725849] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffe5ecbb9a0, and the iovcnt=16, remaining_size=28672 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:54.898 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:54.898 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:54.898 Test: test_nvme_tcp_req_get ...passed 00:07:54.898 Test: test_nvme_tcp_req_init ...passed 00:07:54.898 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:54.898 
Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:54.898 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:07:54.898 Test: test_nvme_tcp_alloc_reqs ...[2024-06-10 11:31:26.726448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbd6c0 is same with the state(6) to be set 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-06-10 11:31:26.726826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbc870 is same with the state(5) to be set 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_pdu_ch_handle ...[2024-06-10 11:31:26.726896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffe5ecbd400 00:07:54.898 [2024-06-10 11:31:26.726958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1226:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:54.898 [2024-06-10 11:31:26.727054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbcd30 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.727125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:54.898 [2024-06-10 11:31:26.727215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbcd30 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.727266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:54.898 [2024-06-10 11:31:26.727307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbcd30 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.727374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbcd30 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.727416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbcd30 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.727490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbcd30 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.727534] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbcd30 is same with the state(5) to be set 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_qpair_connect_sock ...[2024-06-10 11:31:26.727587] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbcd30 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.727764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:54.898 [2024-06-10 11:31:26.727821] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:54.898 [2024-06-10 11:31:26.728123] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:54.898 Test: test_nvme_tcp_c2h_payload_handle ...[2024-06-10 11:31:26.728251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe5ecbcf40): PDU Sequence Error 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_icresp_handle ...[2024-06-10 11:31:26.728330] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:54.898 [2024-06-10 11:31:26.728380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1574:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:54.898 [2024-06-10 11:31:26.728450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbc880 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.728501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:54.898 [2024-06-10 11:31:26.728551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbc880 is same with the state(5) to be set 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_pdu_payload_handle ...[2024-06-10 11:31:26.728607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbc880 is same with the state(0) to be set 00:07:54.898 [2024-06-10 11:31:26.728682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe5ecbd400): PDU Sequence Error 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:07:54.898 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:54.898 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-06-10 11:31:26.728790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffe5ecbbb40 00:07:54.898 [2024-06-10 11:31:26.728948] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffe5ecbb1c0, errno=0, rc=0 00:07:54.898 [2024-06-10 11:31:26.729023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbb1c0 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.729093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5ecbb1c0 is same with the state(5) to be set 00:07:54.898 [2024-06-10 11:31:26.729147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe5ecbb1c0 (0): Success 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-06-10 11:31:26.729199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe5ecbb1c0 (0): Success 00:07:54.898 [2024-06-10 11:31:26.868723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:07:54.898 [2024-06-10 11:31:26.868841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:54.898 Test: test_nvme_tcp_poll_group_get_stats ...[2024-06-10 11:31:26.869066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:54.898 [2024-06-10 11:31:26.869114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_ctrlr_construct ...[2024-06-10 11:31:26.869327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:54.898 [2024-06-10 11:31:26.869376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:54.898 [2024-06-10 11:31:26.869495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:54.898 [2024-06-10 11:31:26.869567] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:54.898 passed 00:07:54.898 Test: test_nvme_tcp_qpair_submit_request ...[2024-06-10 11:31:26.869682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:07:54.898 [2024-06-10 11:31:26.869766] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:54.898 [2024-06-10 11:31:26.869926] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:07:54.898 passed 00:07:54.898 00:07:54.898 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.898 suites 1 1 n/a 0 0 00:07:54.898 tests 27 27 27 0 0 00:07:54.898 asserts 624 624 624 0 n/a 00:07:54.898 00:07:54.898 Elapsed time = 0.144 seconds 00:07:54.898 [2024-06-10 11:31:26.869984] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:54.898 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:54.898 00:07:54.898 00:07:54.898 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.898 http://cunit.sourceforge.net/ 00:07:54.898 00:07:54.898 00:07:54.898 Suite: nvme_transport 00:07:54.898 Test: test_nvme_get_transport ...passed 00:07:54.898 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:54.898 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:54.898 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:54.898 Test: test_ctrlr_get_memory_domains ...passed 00:07:54.898 00:07:54.898 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.898 suites 1 1 n/a 0 0 00:07:54.898 tests 5 5 5 0 0 00:07:54.898 asserts 28 28 28 0 n/a 00:07:54.898 00:07:54.898 Elapsed time = 0.000 seconds 00:07:54.898 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:55.161 00:07:55.161 
00:07:55.161 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.161 http://cunit.sourceforge.net/ 00:07:55.161 00:07:55.161 00:07:55.161 Suite: nvme_io_msg 00:07:55.161 Test: test_nvme_io_msg_send ...passed 00:07:55.161 Test: test_nvme_io_msg_process ...passed 00:07:55.161 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:55.161 00:07:55.161 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.161 suites 1 1 n/a 0 0 00:07:55.161 tests 3 3 3 0 0 00:07:55.161 asserts 56 56 56 0 n/a 00:07:55.161 00:07:55.161 Elapsed time = 0.000 seconds 00:07:55.161 11:31:26 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:55.161 00:07:55.161 00:07:55.161 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.161 http://cunit.sourceforge.net/ 00:07:55.161 00:07:55.161 00:07:55.161 Suite: nvme_pcie_common 00:07:55.161 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-06-10 11:31:27.004029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:55.161 passed 00:07:55.161 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:55.161 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:55.161 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-06-10 11:31:27.004828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:55.161 passed 00:07:55.161 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-06-10 11:31:27.004958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
00:07:55.161 [2024-06-10 11:31:27.005005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:55.161 passed 00:07:55.161 Test: test_nvme_pcie_poll_group_get_stats ...[2024-06-10 11:31:27.005443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:55.161 [2024-06-10 11:31:27.005499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:55.161 passed 00:07:55.161 00:07:55.161 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.161 suites 1 1 n/a 0 0 00:07:55.161 tests 6 6 6 0 0 00:07:55.161 asserts 148 148 148 0 n/a 00:07:55.161 00:07:55.161 Elapsed time = 0.002 seconds 00:07:55.161 11:31:27 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:55.161 00:07:55.161 00:07:55.161 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.161 http://cunit.sourceforge.net/ 00:07:55.161 00:07:55.161 00:07:55.161 Suite: nvme_fabric 00:07:55.161 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:55.161 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:55.161 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:55.161 Test: test_nvme_fabric_discover_probe ...passed 00:07:55.161 Test: test_nvme_fabric_qpair_connect ...[2024-06-10 11:31:27.046699] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:55.161 passed 00:07:55.161 00:07:55.161 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.161 suites 1 1 n/a 0 0 00:07:55.161 tests 5 5 5 0 0 00:07:55.161 asserts 60 60 60 0 n/a 00:07:55.161 00:07:55.161 Elapsed time = 0.001 seconds 00:07:55.161 11:31:27 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:55.161 00:07:55.161 00:07:55.161 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.161 http://cunit.sourceforge.net/ 00:07:55.161 00:07:55.161 00:07:55.161 Suite: nvme_opal 00:07:55.161 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:55.161 Test: test_opal_add_short_atom_header ...passed 00:07:55.161 00:07:55.161 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.161 suites 1 1 n/a 0 0 00:07:55.161 tests 2 2 2 0 0 00:07:55.161 asserts 22 22 22 0 n/a 00:07:55.161 00:07:55.161 Elapsed time = 0.000 seconds 00:07:55.161 [2024-06-10 11:31:27.088090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
00:07:55.161 00:07:55.161 real 0m1.419s 00:07:55.161 user 0m0.703s 00:07:55.161 sys 0m0.575s 00:07:55.161 ************************************ 00:07:55.161 END TEST unittest_nvme 00:07:55.161 ************************************ 00:07:55.161 11:31:27 unittest.unittest_nvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:55.161 11:31:27 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.161 11:31:27 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:55.161 11:31:27 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:55.161 11:31:27 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:55.162 11:31:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:55.162 ************************************ 00:07:55.162 START TEST unittest_log 00:07:55.162 ************************************ 00:07:55.162 11:31:27 unittest.unittest_log -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:55.162 00:07:55.162 00:07:55.162 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.162 http://cunit.sourceforge.net/ 00:07:55.162 00:07:55.162 00:07:55.162 Suite: log 00:07:55.162 Test: log_test ...passed 00:07:55.162 Test: deprecation ...[2024-06-10 11:31:27.195882] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:07:55.162 [2024-06-10 11:31:27.196183] log_ut.c: 57:log_test: *DEBUG*: log test 00:07:55.162 log dump test: 00:07:55.162 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:55.162 spdk dump test: 00:07:55.162 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:55.162 spdk dump test: 00:07:55.162 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:55.162 00000010 65 20 63 68 61 72 73 e chars 00:07:56.541 passed 00:07:56.541 00:07:56.541 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.541 suites 1 1 n/a 0 0 00:07:56.541 tests 2 2 2 0 0 00:07:56.541 asserts 73 73 73 0 n/a 00:07:56.541 00:07:56.541 Elapsed time = 0.001 seconds 00:07:56.541 00:07:56.541 real 0m1.039s 00:07:56.541 user 0m0.020s 00:07:56.541 sys 0m0.019s 00:07:56.541 11:31:28 unittest.unittest_log -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.541 11:31:28 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:07:56.541 ************************************ 00:07:56.541 END TEST unittest_log 00:07:56.541 ************************************ 00:07:56.541 11:31:28 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:56.541 11:31:28 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:56.541 11:31:28 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.541 11:31:28 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:56.541 ************************************ 00:07:56.541 START TEST unittest_lvol 00:07:56.541 ************************************ 00:07:56.541 11:31:28 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:56.541 00:07:56.541 00:07:56.541 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.541 http://cunit.sourceforge.net/ 00:07:56.541 00:07:56.541 00:07:56.541 Suite: lvol 00:07:56.541 Test: lvs_init_unload_success ...[2024-06-10 11:31:28.305828] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol 
store 00:07:56.541 passed 00:07:56.541 Test: lvs_init_destroy_success ...passed 00:07:56.541 Test: lvs_init_opts_success ...[2024-06-10 11:31:28.306396] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:56.541 passed 00:07:56.541 Test: lvs_unload_lvs_is_null_fail ...[2024-06-10 11:31:28.306646] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:56.541 passed 00:07:56.541 Test: lvs_names ...[2024-06-10 11:31:28.306755] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:56.541 [2024-06-10 11:31:28.306817] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:56.541 [2024-06-10 11:31:28.307008] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:56.541 passed 00:07:56.541 Test: lvol_create_destroy_success ...passed 00:07:56.541 Test: lvol_create_fail ...[2024-06-10 11:31:28.307636] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:56.541 passed 00:07:56.541 Test: lvol_destroy_fail ...[2024-06-10 11:31:28.307768] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:56.541 passed 00:07:56.541 Test: lvol_close ...[2024-06-10 11:31:28.308122] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:56.541 [2024-06-10 11:31:28.308359] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:56.541 [2024-06-10 11:31:28.308417] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:56.541 passed 00:07:56.541 Test: lvol_resize ...passed 00:07:56.541 Test: lvol_set_read_only ...passed 00:07:56.541 Test: test_lvs_load ...[2024-06-10 11:31:28.309342] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:56.541 [2024-06-10 11:31:28.309388] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:56.541 passed 00:07:56.541 Test: lvols_load ...[2024-06-10 11:31:28.309643] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:56.541 [2024-06-10 11:31:28.309759] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:56.541 passed 00:07:56.541 Test: lvol_open ...passed 00:07:56.541 Test: lvol_snapshot ...passed 00:07:56.541 Test: lvol_snapshot_fail ...[2024-06-10 11:31:28.310538] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:56.541 passed 00:07:56.541 Test: lvol_clone ...passed 00:07:56.541 Test: lvol_clone_fail ...[2024-06-10 11:31:28.311169] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:56.541 passed 00:07:56.541 Test: lvol_iter_clones ...passed 00:07:56.541 Test: lvol_refcnt ...passed 00:07:56.541 Test: lvol_names ...[2024-06-10 11:31:28.311762] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol f63d2771-ec50-4e23-b6d7-747489ee42c9 because it is still open 00:07:56.541 [2024-06-10 11:31:28.311969] 
/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:56.541 [2024-06-10 11:31:28.312081] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:56.541 [2024-06-10 11:31:28.312322] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:56.541 passed 00:07:56.541 Test: lvol_create_thin_provisioned ...passed 00:07:56.541 Test: lvol_rename ...[2024-06-10 11:31:28.312844] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:56.541 [2024-06-10 11:31:28.312927] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:56.541 passed 00:07:56.541 Test: lvs_rename ...passed 00:07:56.541 Test: lvol_inflate ...[2024-06-10 11:31:28.313210] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:56.541 passed 00:07:56.541 Test: lvol_decouple_parent ...[2024-06-10 11:31:28.313435] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:56.541 [2024-06-10 11:31:28.313715] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:56.541 passed 00:07:56.541 Test: lvol_get_xattr ...passed 00:07:56.541 Test: lvol_esnap_reload ...passed 00:07:56.541 Test: lvol_esnap_create_bad_args ...[2024-06-10 11:31:28.314233] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:56.541 [2024-06-10 11:31:28.314280] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:56.542 [2024-06-10 11:31:28.314343] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:56.542 passed 00:07:56.542 Test: lvol_esnap_create_delete ...[2024-06-10 11:31:28.314473] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:56.542 [2024-06-10 11:31:28.314644] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:56.542 passed 00:07:56.542 Test: lvol_esnap_load_esnaps ...passed 00:07:56.542 Test: lvol_esnap_missing ...[2024-06-10 11:31:28.315041] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:56.542 [2024-06-10 11:31:28.315230] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:56.542 [2024-06-10 11:31:28.315299] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:56.542 passed 00:07:56.542 Test: lvol_esnap_hotplug ... 
00:07:56.542 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:56.542 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:56.542 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:56.542 [2024-06-10 11:31:28.316030] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 9f79f885-7650-477a-8f2d-1c65034735b0: failed to create esnap bs_dev: error -12 00:07:56.542 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:56.542 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:56.542 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:56.542 [2024-06-10 11:31:28.316228] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol c8eb43e1-1a6c-4f3c-9015-0b9d92a5bae3: failed to create esnap bs_dev: error -12 00:07:56.542 [2024-06-10 11:31:28.316345] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol dec6dd63-89d1-4867-8df4-1e5cd4eab8b5: failed to create esnap bs_dev: error -12 00:07:56.542 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:56.542 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:56.542 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:56.542 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:56.542 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:56.542 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:56.542 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:56.542 passed 00:07:56.542 Test: lvol_get_by ...passed 00:07:56.542 Test: lvol_shallow_copy ...[2024-06-10 11:31:28.317556] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:07:56.542 [2024-06-10 11:31:28.317606] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol dacab931-327d-4e7d-b407-de927157ba15 shallow copy, ext_dev must not be NULL 00:07:56.542 passed 00:07:56.542 Test: lvol_set_parent ...[2024-06-10 11:31:28.317856] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:07:56.542 [2024-06-10 11:31:28.317897] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:07:56.542 passed 00:07:56.542 Test: lvol_set_external_parent ...[2024-06-10 11:31:28.318169] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:07:56.542 [2024-06-10 11:31:28.318225] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:07:56.542 [2024-06-10 11:31:28.318302] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:07:56.542 passed 00:07:56.542 00:07:56.542 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.542 suites 1 1 n/a 0 0 00:07:56.542 tests 37 37 37 0 0 00:07:56.542 asserts 1505 1505 1505 0 n/a 00:07:56.542 00:07:56.542 Elapsed time = 0.013 seconds 00:07:56.542 00:07:56.542 real 0m0.061s 00:07:56.542 user 0m0.029s 00:07:56.542 sys 0m0.033s 
00:07:56.542 11:31:28 unittest.unittest_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.542 11:31:28 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:56.542 ************************************ 00:07:56.542 END TEST unittest_lvol 00:07:56.542 ************************************ 00:07:56.542 11:31:28 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:56.542 11:31:28 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:56.542 11:31:28 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:56.542 11:31:28 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.542 11:31:28 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:56.542 ************************************ 00:07:56.542 START TEST unittest_nvme_rdma 00:07:56.542 ************************************ 00:07:56.542 11:31:28 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:56.542 00:07:56.542 00:07:56.542 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.542 http://cunit.sourceforge.net/ 00:07:56.542 00:07:56.542 00:07:56.542 Suite: nvme_rdma 00:07:56.542 Test: test_nvme_rdma_build_sgl_request ...[2024-06-10 11:31:28.425026] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:56.542 [2024-06-10 11:31:28.425408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:56.542 [2024-06-10 11:31:28.425515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:56.542 passed 00:07:56.542 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:56.542 Test: test_nvme_rdma_build_contig_request ...passed 00:07:56.542 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:56.542 Test: test_nvme_rdma_create_reqs ...[2024-06-10 11:31:28.425614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:56.542 [2024-06-10 11:31:28.425741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:56.542 passed 00:07:56.542 Test: test_nvme_rdma_create_rsps ...passed 00:07:56.542 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-06-10 11:31:28.426124] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:56.542 [2024-06-10 11:31:28.426321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:56.542 passed 00:07:56.542 Test: test_nvme_rdma_poller_create ...passed 00:07:56.542 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-06-10 11:31:28.426395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:56.542 [2024-06-10 11:31:28.426571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:56.542 passed 00:07:56.542 Test: test_nvme_rdma_ctrlr_construct ...passed 00:07:56.542 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:56.542 Test: test_nvme_rdma_req_init ...passed 00:07:56.542 Test: test_nvme_rdma_validate_cm_event ...[2024-06-10 11:31:28.426956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:56.542 passed 00:07:56.542 Test: test_nvme_rdma_qpair_init ...passed 00:07:56.542 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:56.543 Test: test_nvme_rdma_memory_domain ...passed 00:07:56.543 Test: test_rdma_ctrlr_get_memory_domains ...passed[2024-06-10 11:31:28.427018] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:56.543 [2024-06-10 11:31:28.427230] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:07:56.543 00:07:56.543 Test: test_rdma_get_memory_translation ...passed 00:07:56.543 Test: test_get_rdma_qpair_from_wc ...passed 00:07:56.543 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:56.543 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:07:56.543 Test: test_nvme_rdma_qpair_set_poller ...[2024-06-10 11:31:28.427333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:56.543 [2024-06-10 11:31:28.427405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:56.543 [2024-06-10 11:31:28.427482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:56.543 [2024-06-10 11:31:28.427539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:56.543 [2024-06-10 11:31:28.427722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:56.543 [2024-06-10 11:31:28.427787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:56.543 [2024-06-10 11:31:28.427831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffc19f93360 on poll group 0x60c000000040 00:07:56.543 [2024-06-10 11:31:28.427902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:56.543 [2024-06-10 11:31:28.427958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:56.543 [2024-06-10 11:31:28.428008] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffc19f93360 on poll group 0x60c000000040 00:07:56.543 passed 00:07:56.543 00:07:56.543 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.543 suites 1 1 n/a 0 0 00:07:56.543 tests 22 22 22 0 0 00:07:56.543 asserts 412 412 412 0 n/a 00:07:56.543 00:07:56.543 Elapsed time = 0.003 seconds 00:07:56.543 [2024-06-10 11:31:28.428125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:56.543 00:07:56.543 real 0m0.051s 00:07:56.543 user 0m0.021s 00:07:56.543 sys 0m0.030s 00:07:56.543 11:31:28 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.543 ************************************ 00:07:56.543 END TEST unittest_nvme_rdma 00:07:56.543 ************************************ 00:07:56.543 11:31:28 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:56.543 11:31:28 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:56.543 11:31:28 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:56.543 11:31:28 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.543 11:31:28 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:56.543 ************************************ 00:07:56.543 START TEST unittest_nvmf_transport 00:07:56.543 ************************************ 00:07:56.543 11:31:28 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:56.543 00:07:56.543 00:07:56.543 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.543 http://cunit.sourceforge.net/ 00:07:56.543 00:07:56.543 00:07:56.543 Suite: nvmf 00:07:56.543 Test: test_spdk_nvmf_transport_create ...[2024-06-10 11:31:28.535052] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:56.543 [2024-06-10 11:31:28.535473] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:56.543 [2024-06-10 11:31:28.535557] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:56.543 passed 00:07:56.543 Test: test_nvmf_transport_poll_group_create ...[2024-06-10 11:31:28.535714] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:56.543 passed 00:07:56.543 Test: test_spdk_nvmf_transport_opts_init ...[2024-06-10 11:31:28.536008] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:07:56.543 [2024-06-10 11:31:28.536111] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:56.543 [2024-06-10 11:31:28.536146] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:56.543 passed 00:07:56.543 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:07:56.543 00:07:56.543 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.543 suites 1 1 n/a 0 0 00:07:56.543 tests 4 4 4 0 0 00:07:56.543 asserts 49 49 49 0 n/a 00:07:56.543 00:07:56.543 Elapsed time = 0.001 seconds 00:07:56.543 00:07:56.543 real 0m0.044s 00:07:56.543 user 0m0.019s 00:07:56.543 sys 0m0.025s 00:07:56.543 11:31:28 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.543 11:31:28 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:07:56.543 ************************************ 00:07:56.543 END TEST unittest_nvmf_transport 00:07:56.543 ************************************ 00:07:56.885 11:31:28 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:56.885 11:31:28 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:56.885 11:31:28 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.885 11:31:28 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:56.885 ************************************ 00:07:56.885 START TEST unittest_rdma 00:07:56.885 ************************************ 00:07:56.885 11:31:28 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:56.885 00:07:56.885 00:07:56.885 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.885 http://cunit.sourceforge.net/ 00:07:56.885 00:07:56.885 00:07:56.885 Suite: rdma_common 00:07:56.885 Test: test_spdk_rdma_pd ...passed 00:07:56.885 00:07:56.885 [2024-06-10 11:31:28.635296] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:56.885 [2024-06-10 11:31:28.635638] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:56.885 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.885 suites 1 1 n/a 0 0 00:07:56.885 tests 1 1 1 0 0 00:07:56.885 asserts 31 31 31 0 n/a 00:07:56.885 00:07:56.885 Elapsed time = 0.001 seconds 00:07:56.885 00:07:56.885 real 0m0.035s 00:07:56.885 user 0m0.007s 00:07:56.885 sys 0m0.029s 00:07:56.885 11:31:28 unittest.unittest_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.885 ************************************ 00:07:56.885 END TEST unittest_rdma 00:07:56.885 ************************************ 00:07:56.885 11:31:28 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:56.885 11:31:28 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:56.885 11:31:28 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:56.885 11:31:28 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:56.885 11:31:28 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.885 11:31:28 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:56.885 
************************************ 00:07:56.885 START TEST unittest_nvme_cuse 00:07:56.885 ************************************ 00:07:56.885 11:31:28 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:56.885 00:07:56.885 00:07:56.885 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.885 http://cunit.sourceforge.net/ 00:07:56.885 00:07:56.885 00:07:56.885 Suite: nvme_cuse 00:07:56.885 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:56.885 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:56.885 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:56.885 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:56.885 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:56.885 Test: test_cuse_nvme_submit_io ...[2024-06-10 11:31:28.743469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:56.885 passed 00:07:56.885 Test: test_cuse_nvme_reset ...passed 00:07:56.885 Test: test_nvme_cuse_stop ...[2024-06-10 11:31:28.743829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:57.452 passed 00:07:57.452 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:57.452 00:07:57.452 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.453 suites 1 1 n/a 0 0 00:07:57.453 tests 9 9 9 0 0 00:07:57.453 asserts 118 118 118 0 n/a 00:07:57.453 00:07:57.453 Elapsed time = 0.505 seconds 00:07:57.453 00:07:57.453 real 0m0.546s 00:07:57.453 user 0m0.248s 00:07:57.453 sys 0m0.300s 00:07:57.453 11:31:29 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:57.453 ************************************ 00:07:57.453 END TEST unittest_nvme_cuse 00:07:57.453 ************************************ 00:07:57.453 11:31:29 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:07:57.453 11:31:29 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:07:57.453 11:31:29 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:57.453 11:31:29 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:57.453 11:31:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:57.453 ************************************ 00:07:57.453 START TEST unittest_nvmf 00:07:57.453 ************************************ 00:07:57.453 11:31:29 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # unittest_nvmf 00:07:57.453 11:31:29 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:57.453 00:07:57.453 00:07:57.453 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.453 http://cunit.sourceforge.net/ 00:07:57.453 00:07:57.453 00:07:57.453 Suite: nvmf 00:07:57.453 Test: test_get_log_page ...[2024-06-10 11:31:29.350568] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:57.453 passed 00:07:57.453 Test: test_process_fabrics_cmd ...passed 00:07:57.453 Test: test_connect ...[2024-06-10 11:31:29.350884] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4683:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:07:57.453 [2024-06-10 11:31:29.351454] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1008:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:57.453 
[2024-06-10 11:31:29.351564] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 871:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:57.453 [2024-06-10 11:31:29.351600] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1047:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:57.453 [2024-06-10 11:31:29.351641] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:07:57.453 [2024-06-10 11:31:29.351721] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 882:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:57.453 [2024-06-10 11:31:29.351776] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 889:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:57.453 [2024-06-10 11:31:29.351818] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 895:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:57.453 [2024-06-10 11:31:29.351858] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 922:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:57.453 [2024-06-10 11:31:29.351952] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:57.453 [2024-06-10 11:31:29.352031] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 672:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:57.453 [2024-06-10 11:31:29.352279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 678:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:57.453 [2024-06-10 11:31:29.352351] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 684:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:57.453 [2024-06-10 11:31:29.352437] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 691:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:57.453 [2024-06-10 11:31:29.352525] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:57.453 [2024-06-10 11:31:29.352615] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:07:57.453 [2024-06-10 11:31:29.352851] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 802:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:07:57.453 [2024-06-10 11:31:29.352908] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 802:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:07:57.453 passed 00:07:57.453 Test: test_get_ns_id_desc_list ...passed 00:07:57.453 Test: test_identify_ns ...[2024-06-10 11:31:29.353137] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:57.453 [2024-06-10 11:31:29.353368] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:57.453 [2024-06-10 11:31:29.353464] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:57.453 passed 00:07:57.453 Test: test_identify_ns_iocs_specific ...[2024-06-10 11:31:29.353594] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:57.453 [2024-06-10 
11:31:29.353807] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:57.453 passed 00:07:57.453 Test: test_reservation_write_exclusive ...passed 00:07:57.453 Test: test_reservation_exclusive_access ...passed 00:07:57.453 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:57.453 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:57.453 Test: test_reservation_notification_log_page ...passed 00:07:57.453 Test: test_get_dif_ctx ...passed 00:07:57.453 Test: test_set_get_features ...[2024-06-10 11:31:29.354221] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1644:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:57.453 [2024-06-10 11:31:29.354279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1644:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:57.453 [2024-06-10 11:31:29.354315] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1655:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:57.453 [2024-06-10 11:31:29.354352] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1731:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:57.453 passed 00:07:57.453 Test: test_identify_ctrlr ...passed 00:07:57.453 Test: test_identify_ctrlr_iocs_specific ...passed 00:07:57.453 Test: test_custom_admin_cmd ...passed 00:07:57.453 Test: test_fused_compare_and_write ...[2024-06-10 11:31:29.354754] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4216:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:57.453 [2024-06-10 11:31:29.354801] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4205:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:57.453 passed 00:07:57.453 Test: test_multi_async_event_reqs ...passed 00:07:57.453 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:57.453 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:57.453 Test: test_multi_async_events ...passed 00:07:57.453 Test: test_rae ...[2024-06-10 11:31:29.354841] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4223:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:57.453 passed 00:07:57.453 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:57.453 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:57.453 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:07:57.453 Test: test_zcopy_read ...passed 00:07:57.453 Test: test_zcopy_write ...passed 00:07:57.453 Test: test_nvmf_property_set ...passed 00:07:57.453 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-06-10 11:31:29.355303] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4683:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:07:57.453 [2024-06-10 11:31:29.355354] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4709:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:07:57.453 [2024-06-10 11:31:29.355506] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1942:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:57.453 [2024-06-10 11:31:29.355553] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1942:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:57.453 passed 00:07:57.453 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:07:57.453 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:07:57.453 Test: 
test_nvmf_check_qpair_active ...[2024-06-10 11:31:29.355598] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1965:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:57.453 [2024-06-10 11:31:29.355623] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1971:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:57.453 [2024-06-10 11:31:29.355682] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1983:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:57.453 [2024-06-10 11:31:29.355783] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4683:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:07:57.453 [2024-06-10 11:31:29.355827] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4697:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:07:57.453 [2024-06-10 11:31:29.355859] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4709:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:07:57.453 passed 00:07:57.453 00:07:57.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.453 suites 1 1 n/a 0 0 00:07:57.453 tests 32 32 32 0 0 00:07:57.453 asserts 977 977 977 0 n/a 00:07:57.453 00:07:57.453 Elapsed time = 0.005 seconds 00:07:57.453 [2024-06-10 11:31:29.355901] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4709:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:07:57.453 [2024-06-10 11:31:29.355919] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4709:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:07:57.454 11:31:29 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:57.454 00:07:57.454 00:07:57.454 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.454 http://cunit.sourceforge.net/ 00:07:57.454 00:07:57.454 00:07:57.454 Suite: nvmf 00:07:57.454 Test: test_get_rw_params ...passed 00:07:57.454 Test: test_get_rw_ext_params ...passed 00:07:57.454 Test: test_lba_in_range ...passed 00:07:57.454 Test: test_get_dif_ctx ...passed 00:07:57.454 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:57.454 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-06-10 11:31:29.390316] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:57.454 [2024-06-10 11:31:29.390687] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:57.454 [2024-06-10 11:31:29.390800] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:57.454 passed 00:07:57.454 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-06-10 11:31:29.390865] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:57.454 [2024-06-10 11:31:29.390957] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:57.454 passed 00:07:57.454 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-06-10 11:31:29.391088] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:57.454 [2024-06-10 11:31:29.391132] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:57.454 [2024-06-10 11:31:29.391213] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:57.454 [2024-06-10 11:31:29.391256] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:57.454 passed 00:07:57.454 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:57.454 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:57.454 00:07:57.454 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.454 suites 1 1 n/a 0 0 00:07:57.454 tests 10 10 10 0 0 00:07:57.454 asserts 159 159 159 0 n/a 00:07:57.454 00:07:57.454 Elapsed time = 0.001 seconds 00:07:57.454 11:31:29 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:57.454 00:07:57.454 00:07:57.454 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.454 http://cunit.sourceforge.net/ 00:07:57.454 00:07:57.454 00:07:57.454 Suite: nvmf 00:07:57.454 Test: test_discovery_log ...passed 00:07:57.454 Test: test_discovery_log_with_filters ...passed 00:07:57.454 00:07:57.454 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.454 suites 1 1 n/a 0 0 00:07:57.454 tests 2 2 2 0 0 00:07:57.454 asserts 238 238 238 0 n/a 00:07:57.454 00:07:57.454 Elapsed time = 0.002 seconds 00:07:57.454 11:31:29 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:57.454 00:07:57.454 00:07:57.454 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.454 http://cunit.sourceforge.net/ 00:07:57.454 00:07:57.454 00:07:57.454 Suite: nvmf 00:07:57.454 Test: nvmf_test_create_subsystem ...[2024-06-10 11:31:29.477616] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:57.454 [2024-06-10 11:31:29.477839] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:07:57.454 [2024-06-10 11:31:29.477966] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:57.454 [2024-06-10 11:31:29.478040] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:07:57.454 [2024-06-10 11:31:29.478072] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:57.454 [2024-06-10 11:31:29.478115] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:07:57.454 [2024-06-10 11:31:29.478188] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 
00:07:57.454 [2024-06-10 11:31:29.478243] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:07:57.454 [2024-06-10 11:31:29.478265] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:57.454 [2024-06-10 11:31:29.478297] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:07:57.454 [2024-06-10 11:31:29.478325] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:57.454 [2024-06-10 11:31:29.478359] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:07:57.454 [2024-06-10 11:31:29.478448] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:57.454 [2024-06-10 11:31:29.478525] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:07:57.454 [2024-06-10 11:31:29.478615] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:07:57.454 [2024-06-10 11:31:29.478650] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:07:57.454 [2024-06-10 11:31:29.478762] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:57.454 [2024-06-10 11:31:29.478798] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:07:57.454 [2024-06-10 11:31:29.478831] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:57.454 passed 00:07:57.454 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-06-10 11:31:29.478876] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:07:57.454 [2024-06-10 11:31:29.478921] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:57.454 [2024-06-10 11:31:29.478948] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:07:57.454 passed 00:07:57.454 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-06-10 11:31:29.479123] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:57.454 [2024-06-10 11:31:29.479163] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2010:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:57.454 passed 00:07:57.455 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:57.455 Test: test_spdk_nvmf_ns_visible ...[2024-06-10 11:31:29.479375] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2141:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:07:57.455 passed 00:07:57.455 Test: test_reservation_register ...[2024-06-10 11:31:29.479551] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:07:57.455 [2024-06-10 11:31:29.479876] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3080:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:57.455 passed 00:07:57.455 Test: test_reservation_register_with_ptpl ...[2024-06-10 11:31:29.479996] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3138:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:57.455 passed 00:07:57.455 Test: test_reservation_acquire_preempt_1 ...[2024-06-10 11:31:29.480756] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3080:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:57.455 passed 00:07:57.455 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:57.455 Test: test_reservation_release ...passed 00:07:57.455 Test: test_reservation_unregister_notification ...[2024-06-10 11:31:29.481937] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3080:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:57.455 [2024-06-10 11:31:29.482126] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3080:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:57.455 passed 00:07:57.455 Test: test_reservation_release_notification ...passed[2024-06-10 11:31:29.482295] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3080:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:57.455 00:07:57.455 Test: test_reservation_release_notification_write_exclusive ...[2024-06-10 11:31:29.482457] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3080:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:57.455 passed 00:07:57.455 Test: test_reservation_clear_notification ...passed 00:07:57.455 Test: test_reservation_preempt_notification ...[2024-06-10 11:31:29.482623] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3080:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:57.455 passed 00:07:57.455 Test: test_spdk_nvmf_ns_event ...[2024-06-10 11:31:29.482831] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3080:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:57.455 passed 00:07:57.455 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:57.455 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:57.455 Test: test_spdk_nvmf_subsystem_add_host ...[2024-06-10 11:31:29.483418] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:57.455 passed 00:07:57.455 Test: test_nvmf_ns_reservation_report ...passed 00:07:57.455 Test: test_nvmf_nqn_is_valid ...passed 00:07:57.455 Test: test_nvmf_ns_reservation_restore ...passed 00:07:57.455 Test: test_nvmf_subsystem_state_change ...[2024-06-10 11:31:29.483494] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:07:57.455 [2024-06-10 11:31:29.483595] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3443:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:57.455 
[2024-06-10 11:31:29.483670] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:57.455 [2024-06-10 11:31:29.483727] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:fb7375e5-dcdc-4b17-81af-bfc4ecbf613": uuid is not the correct length 00:07:57.455 [2024-06-10 11:31:29.483762] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:57.455 [2024-06-10 11:31:29.483844] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2637:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:57.455 passed 00:07:57.455 Test: test_nvmf_reservation_custom_ops ...passed 00:07:57.455 00:07:57.455 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.455 suites 1 1 n/a 0 0 00:07:57.455 tests 24 24 24 0 0 00:07:57.455 asserts 499 499 499 0 n/a 00:07:57.455 00:07:57.455 Elapsed time = 0.007 seconds 00:07:57.455 11:31:29 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:57.715 00:07:57.715 00:07:57.715 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.715 http://cunit.sourceforge.net/ 00:07:57.715 00:07:57.715 00:07:57.715 Suite: nvmf 00:07:57.715 Test: test_nvmf_tcp_create ...[2024-06-10 11:31:29.552904] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:57.715 passed 00:07:57.715 Test: test_nvmf_tcp_destroy ...passed 00:07:57.715 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:57.715 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:57.715 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:57.715 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:57.715 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:57.715 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-06-10 11:31:29.675157] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 passed 00:07:57.715 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:57.715 Test: test_nvmf_tcp_icreq_handle ...[2024-06-10 11:31:29.675252] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14af0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.675357] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14af0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.675422] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.675464] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14af0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.675572] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2117:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:57.715 [2024-06-10 11:31:29.675684] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.675761] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14af0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.675803] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2117:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:57.715 [2024-06-10 11:31:29.675869] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14af0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.675912] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.675960] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14af0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.676002] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:57.715 passed 00:07:57.715 Test: test_nvmf_tcp_check_xfer_type ...passed 00:07:57.715 Test: test_nvmf_tcp_invalid_sgl ...[2024-06-10 11:31:29.676065] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14af0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.676136] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2512:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:57.715 passed 00:07:57.715 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-06-10 11:31:29.676190] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.676227] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14af0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.676302] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2244:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7fff04a15850 00:07:57.715 [2024-06-10 11:31:29.676402] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.676484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.676543] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2301:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7fff04a14fb0 00:07:57.715 [2024-06-10 11:31:29.676590] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.676638] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.676700] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2254:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:57.715 [2024-06-10 11:31:29.676765] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.676828] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.676867] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2293:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:57.715 [2024-06-10 11:31:29.676910] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.676960] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.677005] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.677052] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.677132] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.677181] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.677239] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.677281] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.677325] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.677366] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 [2024-06-10 11:31:29.677442] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.677495] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 passed 00:07:57.715 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-06-10 11:31:29.677556] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:57.715 [2024-06-10 11:31:29.677598] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff04a14fb0 is same with the state(5) to be set 00:07:57.715 passed 00:07:57.715 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-06-10 11:31:29.705008] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:57.715 passed 00:07:57.715 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-06-10 11:31:29.705125] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:07:57.715 [2024-06-10 11:31:29.705570] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:57.715 passed 00:07:57.715 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-06-10 11:31:29.705626] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:57.715 [2024-06-10 11:31:29.705887] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:57.715 passed 00:07:57.715 00:07:57.715 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.715 suites 1 1 n/a 0 0 00:07:57.715 tests 17 17 17 0 0 00:07:57.715 asserts 222 222 222 0 n/a 00:07:57.715 00:07:57.715 Elapsed time = 0.181 seconds 00:07:57.715 [2024-06-10 11:31:29.705946] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:07:57.973 11:31:29 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:57.973 00:07:57.973 00:07:57.973 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.973 http://cunit.sourceforge.net/ 00:07:57.973 00:07:57.973 00:07:57.973 Suite: nvmf 00:07:57.973 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:57.973 00:07:57.973 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.973 suites 1 1 n/a 0 0 00:07:57.973 tests 1 1 1 0 0 00:07:57.973 asserts 17 17 17 0 n/a 00:07:57.973 00:07:57.973 Elapsed time = 0.022 seconds 00:07:57.973 00:07:57.973 real 0m0.567s 00:07:57.973 user 0m0.234s 00:07:57.973 sys 0m0.335s 00:07:57.973 11:31:29 unittest.unittest_nvmf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:57.973 11:31:29 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:07:57.973 ************************************ 00:07:57.973 END TEST unittest_nvmf 00:07:57.973 ************************************ 00:07:57.973 11:31:29 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:57.973 11:31:29 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:57.973 11:31:29 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:57.973 11:31:29 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:57.973 11:31:29 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:57.973 11:31:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:57.973 ************************************ 00:07:57.973 START TEST unittest_nvmf_rdma 00:07:57.973 ************************************ 00:07:57.973 11:31:29 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:57.973 00:07:57.973 00:07:57.973 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.973 http://cunit.sourceforge.net/ 00:07:57.973 00:07:57.973 00:07:57.973 Suite: nvmf 00:07:57.973 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-06-10 11:31:29.977527] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1858:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:57.973 [2024-06-10 11:31:29.978091] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1908:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:57.973 [2024-06-10 11:31:29.978328] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1908:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:57.973 passed 00:07:57.973 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:57.973 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:57.973 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:57.973 Test: test_nvmf_rdma_opts_init ...passed 00:07:57.973 Test: test_nvmf_rdma_request_free_data ...passed 00:07:57.973 Test: test_nvmf_rdma_resources_create ...passed 00:07:57.973 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:57.973 Test: test_nvmf_rdma_resize_cq ...[2024-06-10 11:31:29.983810] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 949:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:07:57.973 Using CQ of insufficient size may lead to CQ overrun 00:07:57.973 [2024-06-10 11:31:29.984154] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:57.973 [2024-06-10 11:31:29.984418] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:57.973 passed 00:07:57.973 00:07:57.973 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.973 suites 1 1 n/a 0 0 00:07:57.973 tests 9 9 9 0 0 00:07:57.973 asserts 579 579 579 0 n/a 00:07:57.973 00:07:57.973 Elapsed time = 0.005 seconds 00:07:57.973 00:07:57.973 real 0m0.055s 00:07:57.973 user 0m0.029s 00:07:57.973 sys 0m0.022s 00:07:57.973 11:31:30 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:57.973 11:31:30 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:57.973 ************************************ 00:07:57.973 END TEST unittest_nvmf_rdma 00:07:57.973 ************************************ 00:07:58.232 11:31:30 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:58.232 11:31:30 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:07:58.232 11:31:30 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:58.232 11:31:30 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:58.232 11:31:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.232 ************************************ 00:07:58.232 START TEST unittest_scsi 00:07:58.232 ************************************ 00:07:58.232 11:31:30 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # unittest_scsi 00:07:58.232 11:31:30 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:58.232 00:07:58.232 00:07:58.232 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.232 http://cunit.sourceforge.net/ 00:07:58.232 00:07:58.232 00:07:58.232 Suite: dev_suite 00:07:58.232 Test: dev_destruct_null_dev ...passed 00:07:58.232 Test: dev_destruct_zero_luns ...passed 00:07:58.232 Test: dev_destruct_null_lun ...passed 00:07:58.232 Test: dev_destruct_success ...passed 00:07:58.232 Test: dev_construct_num_luns_zero ...passed 00:07:58.232 Test: dev_construct_no_lun_zero ...[2024-06-10 11:31:30.089390] 
/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:58.232 [2024-06-10 11:31:30.089732] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:58.232 passed 00:07:58.232 Test: dev_construct_null_lun ...passed 00:07:58.232 Test: dev_construct_name_too_long ...passed 00:07:58.232 Test: dev_construct_success ...passed 00:07:58.232 Test: dev_construct_success_lun_zero_not_first ...[2024-06-10 11:31:30.089790] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:58.232 [2024-06-10 11:31:30.089846] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:58.232 passed 00:07:58.232 Test: dev_queue_mgmt_task_success ...passed 00:07:58.232 Test: dev_queue_task_success ...passed 00:07:58.232 Test: dev_stop_success ...passed 00:07:58.232 Test: dev_add_port_max_ports ...[2024-06-10 11:31:30.090189] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:58.232 passed 00:07:58.232 Test: dev_add_port_construct_failure1 ...passed 00:07:58.232 Test: dev_add_port_construct_failure2 ...[2024-06-10 11:31:30.090313] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:58.232 [2024-06-10 11:31:30.090417] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:58.232 passed 00:07:58.232 Test: dev_add_port_success1 ...passed 00:07:58.232 Test: dev_add_port_success2 ...passed 00:07:58.232 Test: dev_add_port_success3 ...passed 00:07:58.232 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:58.232 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:58.232 Test: dev_find_port_by_id_success ...passed 00:07:58.232 Test: dev_add_lun_bdev_not_found ...passed 00:07:58.232 Test: dev_add_lun_no_free_lun_id ...[2024-06-10 11:31:30.090985] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:58.232 passed 00:07:58.232 Test: dev_add_lun_success1 ...passed 00:07:58.232 Test: dev_add_lun_success2 ...passed 00:07:58.232 Test: dev_check_pending_tasks ...passed 00:07:58.232 Test: dev_iterate_luns ...passed 00:07:58.232 Test: dev_find_free_lun ...passed 00:07:58.232 00:07:58.232 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.232 suites 1 1 n/a 0 0 00:07:58.232 tests 29 29 29 0 0 00:07:58.232 asserts 97 97 97 0 n/a 00:07:58.232 00:07:58.232 Elapsed time = 0.002 seconds 00:07:58.232 11:31:30 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:58.232 00:07:58.232 00:07:58.232 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.232 http://cunit.sourceforge.net/ 00:07:58.232 00:07:58.232 00:07:58.232 Suite: lun_suite 00:07:58.232 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-06-10 11:31:30.133860] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:58.232 passed 00:07:58.232 
Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-06-10 11:31:30.134249] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:58.232 passed 00:07:58.232 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:58.232 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:58.232 Test: lun_task_mgmt_execute_invalid_case ...[2024-06-10 11:31:30.134415] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:58.232 passed 00:07:58.232 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:07:58.232 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:58.232 Test: lun_append_task_null_lun_not_supported ...passed 00:07:58.232 Test: lun_execute_scsi_task_pending ...passed 00:07:58.232 Test: lun_execute_scsi_task_complete ...passed 00:07:58.232 Test: lun_execute_scsi_task_resize ...passed 00:07:58.232 Test: lun_destruct_success ...passed 00:07:58.232 Test: lun_construct_null_ctx ...passed[2024-06-10 11:31:30.134620] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:58.232 00:07:58.232 Test: lun_construct_success ...passed 00:07:58.232 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:07:58.232 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:58.232 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:58.232 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:58.232 00:07:58.232 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.232 suites 1 1 n/a 0 0 00:07:58.232 tests 18 18 18 0 0 00:07:58.232 asserts 153 153 153 0 n/a 00:07:58.232 00:07:58.232 Elapsed time = 0.001 seconds 00:07:58.232 11:31:30 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:58.232 00:07:58.232 00:07:58.232 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.232 http://cunit.sourceforge.net/ 00:07:58.232 00:07:58.232 00:07:58.232 Suite: scsi_suite 00:07:58.232 Test: scsi_init ...passed 00:07:58.232 00:07:58.232 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.232 suites 1 1 n/a 0 0 00:07:58.232 tests 1 1 1 0 0 00:07:58.232 asserts 1 1 1 0 n/a 00:07:58.232 00:07:58.232 Elapsed time = 0.000 seconds 00:07:58.232 11:31:30 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:58.232 00:07:58.232 00:07:58.232 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.232 http://cunit.sourceforge.net/ 00:07:58.232 00:07:58.232 00:07:58.232 Suite: translation_suite 00:07:58.232 Test: mode_select_6_test ...passed 00:07:58.232 Test: mode_select_6_test2 ...passed 00:07:58.232 Test: mode_sense_6_test ...passed 00:07:58.232 Test: mode_sense_10_test ...passed 00:07:58.232 Test: inquiry_evpd_test ...passed 00:07:58.232 Test: inquiry_standard_test ...passed 00:07:58.232 Test: inquiry_overflow_test ...passed 00:07:58.232 Test: task_complete_test ...passed 00:07:58.232 Test: lba_range_test ...passed 00:07:58.232 Test: xfer_len_test ...[2024-06-10 11:31:30.227249] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:58.232 passed 00:07:58.232 Test: xfer_test ...passed 00:07:58.232 Test: scsi_name_padding_test ...passed 00:07:58.232 Test: get_dif_ctx_test ...passed 00:07:58.232 Test: unmap_split_test 
...passed 00:07:58.232 00:07:58.232 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.232 suites 1 1 n/a 0 0 00:07:58.232 tests 14 14 14 0 0 00:07:58.232 asserts 1205 1205 1205 0 n/a 00:07:58.232 00:07:58.232 Elapsed time = 0.004 seconds 00:07:58.232 11:31:30 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:58.232 00:07:58.232 00:07:58.232 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.232 http://cunit.sourceforge.net/ 00:07:58.232 00:07:58.232 00:07:58.232 Suite: reservation_suite 00:07:58.233 Test: test_reservation_register ...[2024-06-10 11:31:30.266021] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:58.233 passed 00:07:58.233 Test: test_reservation_reserve ...[2024-06-10 11:31:30.266429] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:58.233 [2024-06-10 11:31:30.266514] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:07:58.233 passed 00:07:58.233 Test: test_reservation_preempt_non_all_regs ...[2024-06-10 11:31:30.266627] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:58.233 [2024-06-10 11:31:30.266727] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:58.233 [2024-06-10 11:31:30.266822] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:58.233 passed 00:07:58.233 Test: test_reservation_preempt_all_regs ...[2024-06-10 11:31:30.266963] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:58.233 passed 00:07:58.233 Test: test_reservation_cmds_conflict ...[2024-06-10 11:31:30.267105] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:58.233 [2024-06-10 11:31:30.267190] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:58.233 [2024-06-10 11:31:30.267251] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:58.233 [2024-06-10 11:31:30.267295] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:58.233 passed 00:07:58.233 Test: test_scsi2_reserve_release ...passed 00:07:58.233 Test: test_pr_with_scsi2_reserve_release ...[2024-06-10 11:31:30.267346] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:58.233 [2024-06-10 11:31:30.267386] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:58.233 [2024-06-10 11:31:30.267502] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:58.233 passed 00:07:58.233 00:07:58.233 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.233 suites 1 1 
n/a 0 0 00:07:58.233 tests 7 7 7 0 0 00:07:58.233 asserts 257 257 257 0 n/a 00:07:58.233 00:07:58.233 Elapsed time = 0.002 seconds 00:07:58.490 00:07:58.491 real 0m0.222s 00:07:58.491 user 0m0.098s 00:07:58.491 sys 0m0.126s 00:07:58.491 11:31:30 unittest.unittest_scsi -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:58.491 11:31:30 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:07:58.491 ************************************ 00:07:58.491 END TEST unittest_scsi 00:07:58.491 ************************************ 00:07:58.491 11:31:30 unittest -- unit/unittest.sh@278 -- # uname -s 00:07:58.491 11:31:30 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:07:58.491 11:31:30 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:07:58.491 11:31:30 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:58.491 11:31:30 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:58.491 11:31:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.491 ************************************ 00:07:58.491 START TEST unittest_sock 00:07:58.491 ************************************ 00:07:58.491 11:31:30 unittest.unittest_sock -- common/autotest_common.sh@1124 -- # unittest_sock 00:07:58.491 11:31:30 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:58.491 00:07:58.491 00:07:58.491 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.491 http://cunit.sourceforge.net/ 00:07:58.491 00:07:58.491 00:07:58.491 Suite: sock 00:07:58.491 Test: posix_sock ...passed 00:07:58.491 Test: ut_sock ...passed 00:07:58.491 Test: posix_sock_group ...passed 00:07:58.491 Test: ut_sock_group ...passed 00:07:58.491 Test: posix_sock_group_fairness ...passed 00:07:58.491 Test: _posix_sock_close ...passed 00:07:58.491 Test: sock_get_default_opts ...passed 00:07:58.491 Test: ut_sock_impl_get_set_opts ...passed 00:07:58.491 Test: posix_sock_impl_get_set_opts ...passed 00:07:58.491 Test: ut_sock_map ...passed 00:07:58.491 Test: override_impl_opts ...passed 00:07:58.491 Test: ut_sock_group_get_ctx ...passed 00:07:58.491 00:07:58.491 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.491 suites 1 1 n/a 0 0 00:07:58.491 tests 12 12 12 0 0 00:07:58.491 asserts 349 349 349 0 n/a 00:07:58.491 00:07:58.491 Elapsed time = 0.005 seconds 00:07:58.491 11:31:30 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:58.491 00:07:58.491 00:07:58.491 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.491 http://cunit.sourceforge.net/ 00:07:58.491 00:07:58.491 00:07:58.491 Suite: posix 00:07:58.491 Test: flush ...passed 00:07:58.491 00:07:58.491 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.491 suites 1 1 n/a 0 0 00:07:58.491 tests 1 1 1 0 0 00:07:58.491 asserts 28 28 28 0 n/a 00:07:58.491 00:07:58.491 Elapsed time = 0.000 seconds 00:07:58.491 11:31:30 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:58.491 00:07:58.491 real 0m0.105s 00:07:58.491 user 0m0.042s 00:07:58.491 sys 0m0.042s 00:07:58.491 11:31:30 unittest.unittest_sock -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:58.491 11:31:30 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:07:58.491 ************************************ 00:07:58.491 END TEST unittest_sock 00:07:58.491 
************************************ 00:07:58.491 11:31:30 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:58.491 11:31:30 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:58.491 11:31:30 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:58.491 11:31:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.491 ************************************ 00:07:58.491 START TEST unittest_thread 00:07:58.491 ************************************ 00:07:58.491 11:31:30 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:58.491 00:07:58.491 00:07:58.491 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.491 http://cunit.sourceforge.net/ 00:07:58.491 00:07:58.491 00:07:58.491 Suite: io_channel 00:07:58.491 Test: thread_alloc ...passed 00:07:58.491 Test: thread_send_msg ...passed 00:07:58.491 Test: thread_poller ...passed 00:07:58.491 Test: poller_pause ...passed 00:07:58.491 Test: thread_for_each ...passed 00:07:58.749 Test: for_each_channel_remove ...passed 00:07:58.749 Test: for_each_channel_unreg ...[2024-06-10 11:31:30.552673] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x7ffc96dd7740 already registered (old:0x613000000200 new:0x6130000003c0) 00:07:58.749 passed 00:07:58.749 Test: thread_name ...passed 00:07:58.749 Test: channel ...[2024-06-10 11:31:30.555849] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x558465c51c80 00:07:58.749 passed 00:07:58.749 Test: channel_destroy_races ...passed 00:07:58.749 Test: thread_exit_test ...[2024-06-10 11:31:30.559878] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 635:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:07:58.749 passed 00:07:58.749 Test: thread_update_stats_test ...passed 00:07:58.749 Test: nested_channel ...passed 00:07:58.749 Test: device_unregister_and_thread_exit_race ...passed 00:07:58.749 Test: cache_closest_timed_poller ...passed 00:07:58.749 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:58.749 Test: io_device_lookup ...passed 00:07:58.749 Test: spdk_spin ...[2024-06-10 11:31:30.568362] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:58.749 [2024-06-10 11:31:30.568411] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffc96dd7730 00:07:58.749 [2024-06-10 11:31:30.568515] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:58.749 [2024-06-10 11:31:30.569790] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:58.749 [2024-06-10 11:31:30.569850] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffc96dd7730 00:07:58.749 [2024-06-10 11:31:30.569887] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:58.749 [2024-06-10 11:31:30.569916] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffc96dd7730 00:07:58.749 [2024-06-10 11:31:30.569948] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:58.749 [2024-06-10 11:31:30.569979] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffc96dd7730 00:07:58.749 [2024-06-10 11:31:30.570009] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:58.749 [2024-06-10 11:31:30.570053] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffc96dd7730 00:07:58.749 passed 00:07:58.749 Test: for_each_channel_and_thread_exit_race ...passed 00:07:58.749 Test: for_each_thread_and_thread_exit_race ...passed 00:07:58.749 00:07:58.749 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.749 suites 1 1 n/a 0 0 00:07:58.749 tests 20 20 20 0 0 00:07:58.749 asserts 409 409 409 0 n/a 00:07:58.749 00:07:58.749 Elapsed time = 0.039 seconds 00:07:58.749 00:07:58.749 real 0m0.086s 00:07:58.749 user 0m0.061s 00:07:58.749 sys 0m0.025s 00:07:58.749 11:31:30 unittest.unittest_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:58.749 11:31:30 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.749 ************************************ 00:07:58.749 END TEST unittest_thread 00:07:58.749 ************************************ 00:07:58.749 11:31:30 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:58.749 11:31:30 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:58.749 11:31:30 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:58.749 11:31:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.749 ************************************ 00:07:58.749 START TEST unittest_iobuf 00:07:58.749 ************************************ 00:07:58.749 11:31:30 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:58.749 00:07:58.749 00:07:58.749 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.749 http://cunit.sourceforge.net/ 00:07:58.749 00:07:58.749 00:07:58.749 Suite: io_channel 00:07:58.749 Test: iobuf ...passed 00:07:58.749 Test: iobuf_cache ...[2024-06-10 11:31:30.693629] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:58.749 [2024-06-10 11:31:30.693980] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:58.749 [2024-06-10 11:31:30.694137] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:58.750 [2024-06-10 11:31:30.694191] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:07:58.750 [2024-06-10 11:31:30.694283] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:58.750 [2024-06-10 11:31:30.694331] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:58.750 passed 00:07:58.750 00:07:58.750 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.750 suites 1 1 n/a 0 0 00:07:58.750 tests 2 2 2 0 0 00:07:58.750 asserts 107 107 107 0 n/a 00:07:58.750 00:07:58.750 Elapsed time = 0.006 seconds 00:07:58.750 00:07:58.750 real 0m0.053s 00:07:58.750 user 0m0.017s 00:07:58.750 sys 0m0.037s 00:07:58.750 11:31:30 unittest.unittest_iobuf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:58.750 11:31:30 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:07:58.750 ************************************ 00:07:58.750 END TEST unittest_iobuf 00:07:58.750 ************************************ 00:07:58.750 11:31:30 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:07:58.750 11:31:30 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:58.750 11:31:30 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:58.750 11:31:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.750 ************************************ 00:07:58.750 START TEST unittest_util 00:07:58.750 ************************************ 00:07:58.750 11:31:30 unittest.unittest_util -- common/autotest_common.sh@1124 -- # unittest_util 00:07:58.750 11:31:30 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:58.750 00:07:58.750 00:07:58.750 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.750 http://cunit.sourceforge.net/ 00:07:58.750 00:07:58.750 00:07:58.750 Suite: base64 00:07:58.750 Test: test_base64_get_encoded_strlen ...passed 00:07:58.750 Test: test_base64_get_decoded_len ...passed 00:07:58.750 Test: test_base64_encode ...passed 00:07:58.750 Test: test_base64_decode ...passed 00:07:58.750 Test: test_base64_urlsafe_encode ...passed 00:07:58.750 Test: test_base64_urlsafe_decode ...passed 00:07:58.750 00:07:58.750 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.750 suites 1 1 n/a 0 0 00:07:58.750 tests 6 6 6 0 0 00:07:58.750 asserts 112 112 112 0 n/a 00:07:58.750 00:07:58.750 Elapsed time = 0.000 seconds 00:07:59.008 11:31:30 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:59.008 00:07:59.008 00:07:59.008 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.008 http://cunit.sourceforge.net/ 00:07:59.008 00:07:59.008 00:07:59.008 Suite: bit_array 00:07:59.008 Test: test_1bit ...passed 00:07:59.008 Test: test_64bit ...passed 00:07:59.008 Test: test_find ...passed 00:07:59.008 Test: test_resize ...passed 00:07:59.008 Test: test_errors ...passed 00:07:59.008 Test: test_count ...passed 00:07:59.008 Test: test_mask_store_load ...passed 00:07:59.008 Test: test_mask_clear ...passed 00:07:59.008 00:07:59.008 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.008 suites 1 1 n/a 0 0 00:07:59.008 tests 8 8 8 0 0 00:07:59.009 asserts 5075 5075 5075 0 n/a 00:07:59.009 00:07:59.009 Elapsed time = 0.002 seconds 00:07:59.009 11:31:30 
unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:59.009 00:07:59.009 00:07:59.009 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.009 http://cunit.sourceforge.net/ 00:07:59.009 00:07:59.009 00:07:59.009 Suite: cpuset 00:07:59.009 Test: test_cpuset ...passed 00:07:59.009 Test: test_cpuset_parse ...[2024-06-10 11:31:30.864922] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:59.009 [2024-06-10 11:31:30.865277] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:07:59.009 [2024-06-10 11:31:30.865380] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:59.009 [2024-06-10 11:31:30.865472] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:59.009 [2024-06-10 11:31:30.865515] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:59.009 [2024-06-10 11:31:30.865567] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:59.009 [2024-06-10 11:31:30.865604] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:59.009 [2024-06-10 11:31:30.865667] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:59.009 passed 00:07:59.009 Test: test_cpuset_fmt ...passed 00:07:59.009 00:07:59.009 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.009 suites 1 1 n/a 0 0 00:07:59.009 tests 3 3 3 0 0 00:07:59.009 asserts 65 65 65 0 n/a 00:07:59.009 00:07:59.009 Elapsed time = 0.002 seconds 00:07:59.009 11:31:30 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:59.009 00:07:59.009 00:07:59.009 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.009 http://cunit.sourceforge.net/ 00:07:59.009 00:07:59.009 00:07:59.009 Suite: crc16 00:07:59.009 Test: test_crc16_t10dif ...passed 00:07:59.009 Test: test_crc16_t10dif_seed ...passed 00:07:59.009 Test: test_crc16_t10dif_copy ...passed 00:07:59.009 00:07:59.009 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.009 suites 1 1 n/a 0 0 00:07:59.009 tests 3 3 3 0 0 00:07:59.009 asserts 5 5 5 0 n/a 00:07:59.009 00:07:59.009 Elapsed time = 0.000 seconds 00:07:59.009 11:31:30 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:59.009 00:07:59.009 00:07:59.009 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.009 http://cunit.sourceforge.net/ 00:07:59.009 00:07:59.009 00:07:59.009 Suite: crc32_ieee 00:07:59.009 Test: test_crc32_ieee ...passed 00:07:59.009 00:07:59.009 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.009 suites 1 1 n/a 0 0 00:07:59.009 tests 1 1 1 0 0 00:07:59.009 asserts 1 1 1 0 n/a 00:07:59.009 00:07:59.009 Elapsed time = 0.000 seconds 00:07:59.009 11:31:30 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:59.009 00:07:59.009 00:07:59.009 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.009 
http://cunit.sourceforge.net/ 00:07:59.009 00:07:59.009 00:07:59.009 Suite: crc32c 00:07:59.009 Test: test_crc32c ...passed 00:07:59.009 Test: test_crc32c_nvme ...passed 00:07:59.009 00:07:59.009 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.009 suites 1 1 n/a 0 0 00:07:59.009 tests 2 2 2 0 0 00:07:59.009 asserts 16 16 16 0 n/a 00:07:59.009 00:07:59.009 Elapsed time = 0.000 seconds 00:07:59.009 11:31:30 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:59.009 00:07:59.009 00:07:59.009 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.009 http://cunit.sourceforge.net/ 00:07:59.009 00:07:59.009 00:07:59.009 Suite: crc64 00:07:59.009 Test: test_crc64_nvme ...passed 00:07:59.009 00:07:59.009 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.009 suites 1 1 n/a 0 0 00:07:59.009 tests 1 1 1 0 0 00:07:59.009 asserts 4 4 4 0 n/a 00:07:59.009 00:07:59.009 Elapsed time = 0.000 seconds 00:07:59.009 11:31:31 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:59.009 00:07:59.009 00:07:59.009 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.009 http://cunit.sourceforge.net/ 00:07:59.009 00:07:59.009 00:07:59.009 Suite: string 00:07:59.009 Test: test_parse_ip_addr ...passed 00:07:59.009 Test: test_str_chomp ...passed 00:07:59.009 Test: test_parse_capacity ...passed 00:07:59.009 Test: test_sprintf_append_realloc ...passed 00:07:59.009 Test: test_strtol ...passed 00:07:59.009 Test: test_strtoll ...passed 00:07:59.009 Test: test_strarray ...passed 00:07:59.009 Test: test_strcpy_replace ...passed 00:07:59.009 00:07:59.009 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.009 suites 1 1 n/a 0 0 00:07:59.009 tests 8 8 8 0 0 00:07:59.009 asserts 161 161 161 0 n/a 00:07:59.009 00:07:59.009 Elapsed time = 0.001 seconds 00:07:59.009 11:31:31 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:59.269 00:07:59.269 00:07:59.269 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.269 http://cunit.sourceforge.net/ 00:07:59.269 00:07:59.269 00:07:59.269 Suite: dif 00:07:59.269 Test: dif_generate_and_verify_test ...[2024-06-10 11:31:31.083607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:59.269 [2024-06-10 11:31:31.084153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:59.269 [2024-06-10 11:31:31.084479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:59.269 [2024-06-10 11:31:31.084771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:59.269 [2024-06-10 11:31:31.085099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:59.269 [2024-06-10 11:31:31.085407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:59.269 passed 00:07:59.269 Test: dif_disable_check_test ...[2024-06-10 11:31:31.086456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, 
Actual=ffff 00:07:59.269 [2024-06-10 11:31:31.086780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:59.269 [2024-06-10 11:31:31.087069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:59.269 passed 00:07:59.269 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-06-10 11:31:31.088135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:59.269 [2024-06-10 11:31:31.088466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:59.269 [2024-06-10 11:31:31.088792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:59.269 [2024-06-10 11:31:31.089157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:59.269 [2024-06-10 11:31:31.089495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:59.269 [2024-06-10 11:31:31.089818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:59.269 [2024-06-10 11:31:31.090139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:59.269 [2024-06-10 11:31:31.090461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:59.270 [2024-06-10 11:31:31.090926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:59.270 [2024-06-10 11:31:31.091273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:59.270 [2024-06-10 11:31:31.091611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:59.270 passed 00:07:59.270 Test: dif_apptag_mask_test ...[2024-06-10 11:31:31.091942] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:59.270 passed 00:07:59.270 Test: dif_sec_512_md_0_error_test ...[2024-06-10 11:31:31.092254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:59.270 [2024-06-10 11:31:31.092475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:59.270 passed 00:07:59.270 Test: dif_sec_4096_md_0_error_test ...[2024-06-10 11:31:31.092528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:59.270 passed 00:07:59.270 Test: dif_sec_4100_md_128_error_test ...passed 00:07:59.270 Test: dif_guard_seed_test ...passed 00:07:59.270 Test: dif_guard_value_test ...[2024-06-10 11:31:31.092580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:07:59.270 [2024-06-10 11:31:31.092643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:59.270 [2024-06-10 11:31:31.092683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:59.270 passed 00:07:59.270 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:59.270 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:59.270 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:59.270 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:59.270 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:59.270 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:59.270 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:59.270 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:59.270 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:59.270 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-10 11:31:31.137538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.270 [2024-06-10 11:31:31.140035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:07:59.270 [2024-06-10 11:31:31.142518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.145016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.147496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.149948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.152410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.270 [2024-06-10 11:31:31.153601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4289 00:07:59.270 [2024-06-10 11:31:31.154817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.270 [2024-06-10 11:31:31.157283] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:07:59.270 [2024-06-10 11:31:31.159778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.162242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.164710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.167163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.169612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.270 [2024-06-10 11:31:31.170809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ab9f356 00:07:59.270 [2024-06-10 11:31:31.172025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.270 [2024-06-10 11:31:31.174597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:07:59.270 [2024-06-10 11:31:31.177071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.179539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.182431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.185479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.188169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.270 [2024-06-10 11:31:31.189753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=16f8d70bf1097ed8 00:07:59.270 passed 00:07:59.270 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-06-10 11:31:31.190192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.270 [2024-06-10 11:31:31.190609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:07:59.270 [2024-06-10 11:31:31.191008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.191402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.191799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.192125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.192468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.270 [2024-06-10 11:31:31.192741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4289 00:07:59.270 [2024-06-10 11:31:31.193023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.270 [2024-06-10 11:31:31.193354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:07:59.270 [2024-06-10 11:31:31.193685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.193998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.194332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.194627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.194947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.270 [2024-06-10 11:31:31.195218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ab9f356 00:07:59.270 [2024-06-10 11:31:31.195515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.270 [2024-06-10 11:31:31.195811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:07:59.270 [2024-06-10 11:31:31.196137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.196459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.196773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.197077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.197434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.270 [2024-06-10 11:31:31.197709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=16f8d70bf1097ed8 00:07:59.270 passed 00:07:59.270 Test: 
dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-06-10 11:31:31.198020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.270 [2024-06-10 11:31:31.198352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:07:59.270 [2024-06-10 11:31:31.198675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.198983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.270 [2024-06-10 11:31:31.199309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.199621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.270 [2024-06-10 11:31:31.199936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.271 [2024-06-10 11:31:31.200205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4289 00:07:59.271 [2024-06-10 11:31:31.200497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.271 [2024-06-10 11:31:31.200807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:07:59.271 [2024-06-10 11:31:31.201121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.201441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.201808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.202162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.202477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.271 [2024-06-10 11:31:31.202753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ab9f356 00:07:59.271 [2024-06-10 11:31:31.203044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.271 [2024-06-10 11:31:31.203347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:07:59.271 [2024-06-10 11:31:31.203668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.203977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.204287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.204606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.204918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.271 [2024-06-10 11:31:31.205182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=16f8d70bf1097ed8 00:07:59.271 passed 00:07:59.271 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-06-10 11:31:31.205520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.271 [2024-06-10 11:31:31.205900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:07:59.271 [2024-06-10 11:31:31.206226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.206530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.206894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.207199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.207516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.271 [2024-06-10 11:31:31.207793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4289 00:07:59.271 [2024-06-10 11:31:31.208070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.271 [2024-06-10 11:31:31.208378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:07:59.271 [2024-06-10 11:31:31.208745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.209075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.209399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.209737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.210101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.271 [2024-06-10 
11:31:31.210381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ab9f356 00:07:59.271 [2024-06-10 11:31:31.210669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.271 [2024-06-10 11:31:31.210975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:07:59.271 [2024-06-10 11:31:31.211278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.211590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.211898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.212205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.212552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.271 [2024-06-10 11:31:31.212825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=16f8d70bf1097ed8 00:07:59.271 passed 00:07:59.271 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-06-10 11:31:31.213166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.271 [2024-06-10 11:31:31.213468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:07:59.271 [2024-06-10 11:31:31.213799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.214165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.214503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.214831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.215144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.271 [2024-06-10 11:31:31.215397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4289 00:07:59.271 passed 00:07:59.271 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-06-10 11:31:31.215726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.271 [2024-06-10 11:31:31.216037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=38574640, Actual=38574660 00:07:59.271 [2024-06-10 11:31:31.216363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.216688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.217014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.217317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.217658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.271 [2024-06-10 11:31:31.217969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ab9f356 00:07:59.271 [2024-06-10 11:31:31.218318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.271 [2024-06-10 11:31:31.218636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:07:59.271 [2024-06-10 11:31:31.218967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.219287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.219605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.219916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.220236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.271 [2024-06-10 11:31:31.220525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=16f8d70bf1097ed8 00:07:59.271 passed 00:07:59.271 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-06-10 11:31:31.220827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.271 [2024-06-10 11:31:31.221138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:07:59.271 [2024-06-10 11:31:31.221491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.221847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.271 [2024-06-10 11:31:31.222181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 
[2024-06-10 11:31:31.222489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.271 [2024-06-10 11:31:31.222803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.271 [2024-06-10 11:31:31.223074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=4289 00:07:59.271 passed 00:07:59.271 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-06-10 11:31:31.223391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.271 [2024-06-10 11:31:31.223689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:07:59.272 [2024-06-10 11:31:31.224021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.224341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.224670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.224975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.225313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.272 [2024-06-10 11:31:31.225597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ab9f356 00:07:59.272 [2024-06-10 11:31:31.225952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.272 [2024-06-10 11:31:31.226268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:07:59.272 [2024-06-10 11:31:31.226578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.226900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.227209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.227529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.227851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.272 [2024-06-10 11:31:31.228131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=16f8d70bf1097ed8 00:07:59.272 passed 00:07:59.272 Test: 
dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:59.272 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:59.272 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:59.272 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:59.272 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:59.272 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:59.272 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:59.272 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:59.272 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:59.272 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-10 11:31:31.273501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.272 [2024-06-10 11:31:31.274602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=75d0, Actual=75f0 00:07:59.272 [2024-06-10 11:31:31.275699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.276799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.277949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.279071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.280158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.272 [2024-06-10 11:31:31.281248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=91be 00:07:59.272 [2024-06-10 11:31:31.282336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.272 [2024-06-10 11:31:31.283431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5a2a3913, Actual=5a2a3933 00:07:59.272 [2024-06-10 11:31:31.284532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.285640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.286742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.287833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.288930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.272 [2024-06-10 11:31:31.290019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=eaa640ac, Actual=a848f59a 00:07:59.272 [2024-06-10 11:31:31.291119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.272 [2024-06-10 11:31:31.292233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b086083afd38a2a0, Actual=b086083afd38a280 00:07:59.272 [2024-06-10 11:31:31.293324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.294421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.295529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.296643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.297731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.272 [2024-06-10 11:31:31.298865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=a69ff73defe4ec1c 00:07:59.272 passed 00:07:59.272 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-10 11:31:31.299235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.272 [2024-06-10 11:31:31.299501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=75d0, Actual=75f0 00:07:59.272 [2024-06-10 11:31:31.299776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.300025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.300313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.300615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.300878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.272 [2024-06-10 11:31:31.301149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=91be 00:07:59.272 [2024-06-10 11:31:31.301424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.272 [2024-06-10 11:31:31.301702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5a2a3913, Actual=5a2a3933 00:07:59.272 [2024-06-10 11:31:31.301986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 
11:31:31.302260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.302512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.302796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.303064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.272 [2024-06-10 11:31:31.303338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a848f59a 00:07:59.272 [2024-06-10 11:31:31.303629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.272 [2024-06-10 11:31:31.303890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b086083afd38a2a0, Actual=b086083afd38a280 00:07:59.272 [2024-06-10 11:31:31.304161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.304420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.272 [2024-06-10 11:31:31.304695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.304961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.272 [2024-06-10 11:31:31.305256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.272 [2024-06-10 11:31:31.305521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=a69ff73defe4ec1c 00:07:59.272 passed 00:07:59.272 Test: dix_sec_512_md_0_error ...passed 00:07:59.272 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-06-10 11:31:31.305596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:07:59.272 passed 00:07:59.272 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:59.272 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:59.531 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:59.531 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:59.531 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:59.531 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:59.531 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:59.531 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:59.531 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-10 11:31:31.349598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.531 [2024-06-10 11:31:31.350722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=75d0, Actual=75f0 00:07:59.532 [2024-06-10 11:31:31.351806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.352902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.354011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.355121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.356191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.532 [2024-06-10 11:31:31.357292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=91be 00:07:59.532 [2024-06-10 11:31:31.358375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.532 [2024-06-10 11:31:31.359484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5a2a3913, Actual=5a2a3933 00:07:59.532 [2024-06-10 11:31:31.360604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.361692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.362813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.363905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.364988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.532 [2024-06-10 11:31:31.366080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a848f59a 00:07:59.532 [2024-06-10 11:31:31.367201] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.532 [2024-06-10 11:31:31.368281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b086083afd38a2a0, Actual=b086083afd38a280 00:07:59.532 [2024-06-10 11:31:31.369370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.370453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.371563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.372644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.373755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.532 [2024-06-10 11:31:31.374854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=a69ff73defe4ec1c 00:07:59.532 passed 00:07:59.532 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-10 11:31:31.375235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:07:59.532 [2024-06-10 11:31:31.375498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=75d0, Actual=75f0 00:07:59.532 [2024-06-10 11:31:31.375771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.376044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.376333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.376601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.376876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ed7 00:07:59.532 [2024-06-10 11:31:31.377135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=91be 00:07:59.532 [2024-06-10 11:31:31.377409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:07:59.532 [2024-06-10 11:31:31.377676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5a2a3913, Actual=5a2a3933 00:07:59.532 [2024-06-10 11:31:31.377964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.378227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.378492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.378774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.379043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d54fa715 00:07:59.532 [2024-06-10 11:31:31.379314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a848f59a 00:07:59.532 [2024-06-10 11:31:31.379587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:07:59.532 [2024-06-10 11:31:31.379847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b086083afd38a2a0, Actual=b086083afd38a280 00:07:59.532 [2024-06-10 11:31:31.380112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.380388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:07:59.532 [2024-06-10 11:31:31.380655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.380934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:07:59.532 [2024-06-10 11:31:31.381212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=df183dfb3cbbc47 00:07:59.532 [2024-06-10 11:31:31.381475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=a69ff73defe4ec1c 00:07:59.532 passed 00:07:59.532 Test: set_md_interleave_iovs_test ...passed 00:07:59.532 Test: set_md_interleave_iovs_split_test ...passed 00:07:59.532 Test: dif_generate_stream_pi_16_test ...passed 00:07:59.532 Test: dif_generate_stream_test ...passed 00:07:59.532 Test: set_md_interleave_iovs_alignment_test ...[2024-06-10 11:31:31.389187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:07:59.532 passed 00:07:59.532 Test: dif_generate_split_test ...passed 00:07:59.532 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:59.532 Test: dif_verify_split_test ...passed 00:07:59.532 Test: dif_verify_stream_multi_segments_test ...passed 00:07:59.532 Test: update_crc32c_pi_16_test ...passed 00:07:59.532 Test: update_crc32c_test ...passed 00:07:59.532 Test: dif_update_crc32c_split_test ...passed 00:07:59.532 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:59.532 Test: get_range_with_md_test ...passed 00:07:59.532 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:59.532 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:59.532 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:59.532 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:59.532 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:59.532 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:59.532 Test: dif_generate_and_verify_unmap_test ...passed 00:07:59.532 00:07:59.532 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.532 suites 1 1 n/a 0 0 00:07:59.532 tests 79 79 79 0 0 00:07:59.532 asserts 3584 3584 3584 0 n/a 00:07:59.532 00:07:59.532 Elapsed time = 0.352 seconds 00:07:59.532 11:31:31 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:59.532 00:07:59.532 00:07:59.532 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.532 http://cunit.sourceforge.net/ 00:07:59.532 00:07:59.532 00:07:59.532 Suite: iov 00:07:59.532 Test: test_single_iov ...passed 00:07:59.532 Test: test_simple_iov ...passed 00:07:59.532 Test: test_complex_iov ...passed 00:07:59.532 Test: test_iovs_to_buf ...passed 00:07:59.532 Test: test_buf_to_iovs ...passed 00:07:59.532 Test: test_memset ...passed 00:07:59.532 Test: test_iov_one ...passed 00:07:59.532 Test: test_iov_xfer ...passed 00:07:59.532 00:07:59.532 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.532 suites 1 1 n/a 0 0 00:07:59.532 tests 8 8 8 0 0 00:07:59.532 asserts 156 156 156 0 n/a 00:07:59.532 00:07:59.532 Elapsed time = 0.000 seconds 00:07:59.532 11:31:31 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:59.532 00:07:59.532 00:07:59.532 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.532 http://cunit.sourceforge.net/ 00:07:59.532 00:07:59.532 00:07:59.532 Suite: math 00:07:59.532 Test: test_serial_number_arithmetic ...passed 00:07:59.532 Suite: erase 00:07:59.532 Test: test_memset_s ...passed 00:07:59.532 00:07:59.532 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.532 suites 2 2 n/a 0 0 00:07:59.532 tests 2 2 2 0 0 00:07:59.532 asserts 18 18 18 0 n/a 00:07:59.532 00:07:59.532 Elapsed time = 0.000 seconds 00:07:59.532 11:31:31 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:59.532 00:07:59.532 00:07:59.532 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.532 http://cunit.sourceforge.net/ 00:07:59.532 00:07:59.533 00:07:59.533 Suite: pipe 00:07:59.533 Test: test_create_destroy ...passed 00:07:59.533 Test: test_write_get_buffer ...passed 00:07:59.533 Test: test_write_advance ...passed 00:07:59.533 Test: test_read_get_buffer ...passed 00:07:59.533 Test: test_read_advance ...passed 00:07:59.533 Test: 
test_data ...passed 00:07:59.533 00:07:59.533 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.533 suites 1 1 n/a 0 0 00:07:59.533 tests 6 6 6 0 0 00:07:59.533 asserts 251 251 251 0 n/a 00:07:59.533 00:07:59.533 Elapsed time = 0.000 seconds 00:07:59.533 11:31:31 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:59.791 00:07:59.791 00:07:59.791 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.791 http://cunit.sourceforge.net/ 00:07:59.791 00:07:59.791 00:07:59.791 Suite: xor 00:07:59.791 Test: test_xor_gen ...passed 00:07:59.791 00:07:59.791 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.791 suites 1 1 n/a 0 0 00:07:59.791 tests 1 1 1 0 0 00:07:59.791 asserts 17 17 17 0 n/a 00:07:59.791 00:07:59.791 Elapsed time = 0.007 seconds 00:07:59.791 00:07:59.791 real 0m0.843s 00:07:59.791 user 0m0.583s 00:07:59.791 sys 0m0.266s 00:07:59.791 ************************************ 00:07:59.791 END TEST unittest_util 00:07:59.791 ************************************ 00:07:59.791 11:31:31 unittest.unittest_util -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:59.791 11:31:31 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:07:59.791 11:31:31 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:59.791 11:31:31 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:59.791 11:31:31 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:59.791 11:31:31 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:59.791 11:31:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:59.791 ************************************ 00:07:59.791 START TEST unittest_vhost 00:07:59.791 ************************************ 00:07:59.791 11:31:31 unittest.unittest_vhost -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:59.791 00:07:59.791 00:07:59.791 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.791 http://cunit.sourceforge.net/ 00:07:59.791 00:07:59.791 00:07:59.791 Suite: vhost_suite 00:07:59.791 Test: desc_to_iov_test ...[2024-06-10 11:31:31.707424] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:59.791 passed 00:07:59.791 Test: create_controller_test ...[2024-06-10 11:31:31.712062] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:59.791 [2024-06-10 11:31:31.712204] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:59.791 [2024-06-10 11:31:31.712336] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:59.791 [2024-06-10 11:31:31.712438] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:59.791 [2024-06-10 11:31:31.712522] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:59.792 [2024-06-10 11:31:31.712914] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for 
controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:07:59.792 [2024-06-10 11:31:31.713944] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:59.792 passed 00:07:59.792 Test: session_find_by_vid_test ...passed 00:07:59.792 Test: remove_controller_test ...[2024-06-10 11:31:31.716145] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:59.792 passed 00:07:59.792 Test: vq_avail_ring_get_test ...passed 00:07:59.792 Test: vq_packed_ring_test ...passed 00:07:59.792 Test: vhost_blk_construct_test ...passed 00:07:59.792 00:07:59.792 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.792 suites 1 1 n/a 0 0 00:07:59.792 tests 7 7 7 0 0 00:07:59.792 asserts 147 147 147 0 n/a 00:07:59.792 00:07:59.792 Elapsed time = 0.013 seconds 00:07:59.792 00:07:59.792 real 0m0.068s 00:07:59.792 user 0m0.044s 00:07:59.792 sys 0m0.024s 00:07:59.792 11:31:31 unittest.unittest_vhost -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:59.792 11:31:31 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:07:59.792 ************************************ 00:07:59.792 END TEST unittest_vhost 00:07:59.792 ************************************ 00:07:59.792 11:31:31 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:59.792 11:31:31 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:59.792 11:31:31 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:59.792 11:31:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:59.792 ************************************ 00:07:59.792 START TEST unittest_dma 00:07:59.792 ************************************ 00:07:59.792 11:31:31 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:59.792 00:07:59.792 00:07:59.792 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.792 http://cunit.sourceforge.net/ 00:07:59.792 00:07:59.792 00:07:59.792 Suite: dma_suite 00:07:59.792 Test: test_dma ...[2024-06-10 11:31:31.834540] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:59.792 passed 00:07:59.792 00:07:59.792 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.792 suites 1 1 n/a 0 0 00:07:59.792 tests 1 1 1 0 0 00:07:59.792 asserts 54 54 54 0 n/a 00:07:59.792 00:07:59.792 Elapsed time = 0.001 seconds 00:08:00.051 00:08:00.051 real 0m0.041s 00:08:00.051 user 0m0.021s 00:08:00.051 sys 0m0.021s 00:08:00.051 11:31:31 unittest.unittest_dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:00.051 11:31:31 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:08:00.051 ************************************ 00:08:00.051 END TEST unittest_dma 00:08:00.051 ************************************ 
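Note: each CUnit suite in this run is a standalone binary under test/unit/lib/, invoked through the run_test wrapper from autotest_common.sh. A minimal sketch for reproducing a single suite locally, using only paths that appear in this log (it assumes the unit tests are already built in this checkout):
  cd /home/vagrant/spdk_repo/spdk
  # re-run one suite directly; it prints the same CUnit summary shown above
  ./test/unit/lib/dma/dma.c/dma_ut
  # or drive the full set through the same harness this job uses
  ./test/unit/unittest.sh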
00:08:00.051 11:31:31 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:08:00.051 11:31:31 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:00.051 11:31:31 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:00.051 11:31:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.051 ************************************ 00:08:00.051 START TEST unittest_init 00:08:00.051 ************************************ 00:08:00.051 11:31:31 unittest.unittest_init -- common/autotest_common.sh@1124 -- # unittest_init 00:08:00.051 11:31:31 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:00.051 00:08:00.051 00:08:00.051 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.051 http://cunit.sourceforge.net/ 00:08:00.051 00:08:00.051 00:08:00.051 Suite: subsystem_suite 00:08:00.051 Test: subsystem_sort_test_depends_on_single ...passed 00:08:00.051 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:00.051 Test: subsystem_sort_test_missing_dependency ...passed 00:08:00.051 00:08:00.051 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.051 suites 1 1 n/a 0 0 00:08:00.051 tests 3 3 3 0 0 00:08:00.051 asserts 20 20 20 0 n/a 00:08:00.051 00:08:00.051 Elapsed time = 0.001 seconds 00:08:00.051 [2024-06-10 11:31:31.929678] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:00.051 [2024-06-10 11:31:31.930023] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:00.051 00:08:00.051 real 0m0.043s 00:08:00.051 user 0m0.028s 00:08:00.051 sys 0m0.016s 00:08:00.051 11:31:31 unittest.unittest_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:00.051 11:31:31 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:08:00.051 ************************************ 00:08:00.051 END TEST unittest_init 00:08:00.051 ************************************ 00:08:00.051 11:31:31 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:00.051 11:31:31 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:00.051 11:31:31 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:00.051 11:31:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.051 ************************************ 00:08:00.051 START TEST unittest_keyring 00:08:00.051 ************************************ 00:08:00.051 11:31:32 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:00.051 00:08:00.051 00:08:00.051 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.051 http://cunit.sourceforge.net/ 00:08:00.051 00:08:00.051 00:08:00.051 Suite: keyring 00:08:00.051 Test: test_keyring_add_remove ...[2024-06-10 11:31:32.022735] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:00.051 [2024-06-10 11:31:32.023112] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:00.051 [2024-06-10 11:31:32.023200] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:00.051 passed 00:08:00.051 Test: 
test_keyring_get_put ...passed 00:08:00.051 00:08:00.051 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.051 suites 1 1 n/a 0 0 00:08:00.051 tests 2 2 2 0 0 00:08:00.051 asserts 44 44 44 0 n/a 00:08:00.051 00:08:00.051 Elapsed time = 0.001 seconds 00:08:00.051 00:08:00.051 real 0m0.039s 00:08:00.051 user 0m0.029s 00:08:00.051 sys 0m0.009s 00:08:00.051 ************************************ 00:08:00.051 END TEST unittest_keyring 00:08:00.051 ************************************ 00:08:00.051 11:31:32 unittest.unittest_keyring -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:00.051 11:31:32 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:08:00.051 11:31:32 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:08:00.051 11:31:32 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:00.051 11:31:32 unittest -- unit/unittest.sh@293 -- # hostname 00:08:00.051 11:31:32 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:00.308 geninfo: WARNING: invalid characters removed from testname! 00:08:32.435 11:32:02 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:36.618 11:32:08 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:39.983 11:32:11 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:42.527 11:32:14 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:45.808 11:32:17 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:49.135 11:32:20 
unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:51.705 11:32:23 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:54.233 11:32:25 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:54.233 11:32:25 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:54.801 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:54.801 Found 321 entries. 00:08:54.801 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:54.801 Writing .css and .png files. 00:08:54.801 Generating output. 00:08:54.801 Processing file include/linux/virtio_ring.h 00:08:55.061 Processing file include/spdk/base64.h 00:08:55.061 Processing file include/spdk/histogram_data.h 00:08:55.061 Processing file include/spdk/util.h 00:08:55.061 Processing file include/spdk/endian.h 00:08:55.061 Processing file include/spdk/nvmf_transport.h 00:08:55.061 Processing file include/spdk/thread.h 00:08:55.061 Processing file include/spdk/nvme.h 00:08:55.061 Processing file include/spdk/nvme_spec.h 00:08:55.061 Processing file include/spdk/trace.h 00:08:55.061 Processing file include/spdk/mmio.h 00:08:55.061 Processing file include/spdk/bdev_module.h 00:08:55.319 Processing file include/spdk_internal/utf.h 00:08:55.319 Processing file include/spdk_internal/sgl.h 00:08:55.319 Processing file include/spdk_internal/sock.h 00:08:55.319 Processing file include/spdk_internal/rdma.h 00:08:55.319 Processing file include/spdk_internal/virtio.h 00:08:55.319 Processing file include/spdk_internal/nvme_tcp.h 00:08:55.319 Processing file lib/accel/accel_rpc.c 00:08:55.319 Processing file lib/accel/accel.c 00:08:55.319 Processing file lib/accel/accel_sw.c 00:08:55.577 Processing file lib/bdev/scsi_nvme.c 00:08:55.577 Processing file lib/bdev/bdev_zone.c 00:08:55.577 Processing file lib/bdev/part.c 00:08:55.577 Processing file lib/bdev/bdev.c 00:08:55.577 Processing file lib/bdev/bdev_rpc.c 00:08:55.835 Processing file lib/blob/blobstore.h 00:08:55.835 Processing file lib/blob/request.c 00:08:55.835 Processing file lib/blob/zeroes.c 00:08:55.835 Processing file lib/blob/blobstore.c 00:08:55.835 Processing file lib/blob/blob_bs_dev.c 00:08:56.093 Processing file lib/blobfs/tree.c 00:08:56.093 Processing file lib/blobfs/blobfs.c 00:08:56.093 Processing file lib/conf/conf.c 00:08:56.093 Processing file lib/dma/dma.c 00:08:56.351 Processing file lib/env_dpdk/threads.c 00:08:56.351 Processing file lib/env_dpdk/pci_virtio.c 00:08:56.351 Processing file lib/env_dpdk/pci.c 00:08:56.351 Processing file lib/env_dpdk/pci_vmd.c 
00:08:56.351 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:56.351 Processing file lib/env_dpdk/init.c 00:08:56.351 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:56.351 Processing file lib/env_dpdk/pci_event.c 00:08:56.351 Processing file lib/env_dpdk/memory.c 00:08:56.351 Processing file lib/env_dpdk/pci_ioat.c 00:08:56.351 Processing file lib/env_dpdk/env.c 00:08:56.351 Processing file lib/env_dpdk/sigbus_handler.c 00:08:56.351 Processing file lib/env_dpdk/pci_idxd.c 00:08:56.351 Processing file lib/env_dpdk/pci_dpdk.c 00:08:56.609 Processing file lib/event/app_rpc.c 00:08:56.609 Processing file lib/event/reactor.c 00:08:56.609 Processing file lib/event/scheduler_static.c 00:08:56.609 Processing file lib/event/log_rpc.c 00:08:56.609 Processing file lib/event/app.c 00:08:56.867 Processing file lib/ftl/ftl_band.h 00:08:56.867 Processing file lib/ftl/ftl_trace.c 00:08:56.867 Processing file lib/ftl/ftl_core.c 00:08:56.867 Processing file lib/ftl/ftl_sb.c 00:08:56.867 Processing file lib/ftl/ftl_writer.h 00:08:56.867 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:56.867 Processing file lib/ftl/ftl_nv_cache.h 00:08:56.867 Processing file lib/ftl/ftl_init.c 00:08:56.867 Processing file lib/ftl/ftl_reloc.c 00:08:56.867 Processing file lib/ftl/ftl_band_ops.c 00:08:56.867 Processing file lib/ftl/ftl_writer.c 00:08:56.867 Processing file lib/ftl/ftl_rq.c 00:08:56.867 Processing file lib/ftl/ftl_l2p_cache.c 00:08:56.867 Processing file lib/ftl/ftl_core.h 00:08:56.867 Processing file lib/ftl/ftl_band.c 00:08:56.867 Processing file lib/ftl/ftl_debug.c 00:08:56.867 Processing file lib/ftl/ftl_debug.h 00:08:56.867 Processing file lib/ftl/ftl_io.c 00:08:56.867 Processing file lib/ftl/ftl_nv_cache.c 00:08:56.867 Processing file lib/ftl/ftl_layout.c 00:08:56.867 Processing file lib/ftl/ftl_l2p_flat.c 00:08:56.867 Processing file lib/ftl/ftl_io.h 00:08:56.867 Processing file lib/ftl/ftl_l2p.c 00:08:56.867 Processing file lib/ftl/ftl_p2l.c 00:08:56.867 Processing file lib/ftl/base/ftl_base_dev.c 00:08:56.867 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:57.125 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:57.125 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:57.125 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:57.383 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:08:57.384 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:08:57.384 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:57.384 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:08:57.384 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:57.384 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:57.384 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:08:57.384 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:57.384 Processing file lib/ftl/utils/ftl_property.c 00:08:57.384 Processing file 
lib/ftl/utils/ftl_mempool.c 00:08:57.384 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:57.384 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:57.384 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:57.384 Processing file lib/ftl/utils/ftl_property.h 00:08:57.384 Processing file lib/ftl/utils/ftl_conf.c 00:08:57.384 Processing file lib/ftl/utils/ftl_md.c 00:08:57.384 Processing file lib/ftl/utils/ftl_df.h 00:08:57.642 Processing file lib/idxd/idxd_user.c 00:08:57.642 Processing file lib/idxd/idxd.c 00:08:57.642 Processing file lib/idxd/idxd_internal.h 00:08:57.642 Processing file lib/init/subsystem.c 00:08:57.642 Processing file lib/init/subsystem_rpc.c 00:08:57.642 Processing file lib/init/rpc.c 00:08:57.642 Processing file lib/init/json_config.c 00:08:57.642 Processing file lib/ioat/ioat.c 00:08:57.642 Processing file lib/ioat/ioat_internal.h 00:08:58.220 Processing file lib/iscsi/init_grp.c 00:08:58.220 Processing file lib/iscsi/portal_grp.c 00:08:58.220 Processing file lib/iscsi/iscsi_subsystem.c 00:08:58.220 Processing file lib/iscsi/task.h 00:08:58.220 Processing file lib/iscsi/task.c 00:08:58.220 Processing file lib/iscsi/iscsi.c 00:08:58.220 Processing file lib/iscsi/tgt_node.c 00:08:58.220 Processing file lib/iscsi/iscsi.h 00:08:58.220 Processing file lib/iscsi/iscsi_rpc.c 00:08:58.220 Processing file lib/iscsi/conn.c 00:08:58.220 Processing file lib/iscsi/param.c 00:08:58.220 Processing file lib/iscsi/md5.c 00:08:58.220 Processing file lib/json/json_util.c 00:08:58.220 Processing file lib/json/json_parse.c 00:08:58.220 Processing file lib/json/json_write.c 00:08:58.478 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:58.478 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:58.478 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:58.478 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:58.478 Processing file lib/keyring/keyring_rpc.c 00:08:58.478 Processing file lib/keyring/keyring.c 00:08:58.478 Processing file lib/log/log.c 00:08:58.478 Processing file lib/log/log_deprecated.c 00:08:58.478 Processing file lib/log/log_flags.c 00:08:58.478 Processing file lib/lvol/lvol.c 00:08:58.737 Processing file lib/nbd/nbd_rpc.c 00:08:58.737 Processing file lib/nbd/nbd.c 00:08:58.737 Processing file lib/notify/notify.c 00:08:58.737 Processing file lib/notify/notify_rpc.c 00:08:59.304 Processing file lib/nvme/nvme_io_msg.c 00:08:59.304 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:59.304 Processing file lib/nvme/nvme_fabric.c 00:08:59.304 Processing file lib/nvme/nvme_cuse.c 00:08:59.304 Processing file lib/nvme/nvme_qpair.c 00:08:59.304 Processing file lib/nvme/nvme_rdma.c 00:08:59.304 Processing file lib/nvme/nvme_quirks.c 00:08:59.304 Processing file lib/nvme/nvme_discovery.c 00:08:59.304 Processing file lib/nvme/nvme_poll_group.c 00:08:59.304 Processing file lib/nvme/nvme_zns.c 00:08:59.304 Processing file lib/nvme/nvme_pcie.c 00:08:59.304 Processing file lib/nvme/nvme_tcp.c 00:08:59.304 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:59.304 Processing file lib/nvme/nvme_pcie_common.c 00:08:59.304 Processing file lib/nvme/nvme_ctrlr.c 00:08:59.304 Processing file lib/nvme/nvme_ns.c 00:08:59.304 Processing file lib/nvme/nvme_pcie_internal.h 00:08:59.304 Processing file lib/nvme/nvme_ns_cmd.c 00:08:59.304 Processing file lib/nvme/nvme_internal.h 00:08:59.304 Processing file lib/nvme/nvme_auth.c 00:08:59.304 Processing file lib/nvme/nvme_opal.c 00:08:59.304 Processing file lib/nvme/nvme.c 00:08:59.304 Processing file lib/nvme/nvme_transport.c 
00:08:59.304 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:59.869 Processing file lib/nvmf/ctrlr_bdev.c 00:08:59.869 Processing file lib/nvmf/nvmf.c 00:08:59.869 Processing file lib/nvmf/nvmf_internal.h 00:08:59.869 Processing file lib/nvmf/auth.c 00:08:59.869 Processing file lib/nvmf/tcp.c 00:08:59.869 Processing file lib/nvmf/rdma.c 00:08:59.869 Processing file lib/nvmf/nvmf_rpc.c 00:08:59.869 Processing file lib/nvmf/ctrlr_discovery.c 00:08:59.869 Processing file lib/nvmf/transport.c 00:08:59.869 Processing file lib/nvmf/subsystem.c 00:08:59.869 Processing file lib/nvmf/ctrlr.c 00:08:59.869 Processing file lib/rdma/rdma_verbs.c 00:08:59.869 Processing file lib/rdma/common.c 00:09:00.127 Processing file lib/rpc/rpc.c 00:09:00.127 Processing file lib/scsi/task.c 00:09:00.127 Processing file lib/scsi/scsi_rpc.c 00:09:00.127 Processing file lib/scsi/scsi_pr.c 00:09:00.127 Processing file lib/scsi/scsi.c 00:09:00.127 Processing file lib/scsi/scsi_bdev.c 00:09:00.127 Processing file lib/scsi/dev.c 00:09:00.127 Processing file lib/scsi/port.c 00:09:00.127 Processing file lib/scsi/lun.c 00:09:00.384 Processing file lib/sock/sock_rpc.c 00:09:00.384 Processing file lib/sock/sock.c 00:09:00.384 Processing file lib/thread/thread.c 00:09:00.384 Processing file lib/thread/iobuf.c 00:09:00.384 Processing file lib/trace/trace_flags.c 00:09:00.384 Processing file lib/trace/trace.c 00:09:00.384 Processing file lib/trace/trace_rpc.c 00:09:00.642 Processing file lib/trace_parser/trace.cpp 00:09:00.642 Processing file lib/ut/ut.c 00:09:00.642 Processing file lib/ut_mock/mock.c 00:09:00.900 Processing file lib/util/strerror_tls.c 00:09:00.900 Processing file lib/util/dif.c 00:09:00.900 Processing file lib/util/fd.c 00:09:00.900 Processing file lib/util/fd_group.c 00:09:00.900 Processing file lib/util/crc32c.c 00:09:00.900 Processing file lib/util/hexlify.c 00:09:00.900 Processing file lib/util/zipf.c 00:09:00.900 Processing file lib/util/iov.c 00:09:00.900 Processing file lib/util/file.c 00:09:00.900 Processing file lib/util/string.c 00:09:00.900 Processing file lib/util/uuid.c 00:09:00.900 Processing file lib/util/bit_array.c 00:09:00.900 Processing file lib/util/math.c 00:09:00.900 Processing file lib/util/cpuset.c 00:09:00.900 Processing file lib/util/crc32.c 00:09:00.900 Processing file lib/util/xor.c 00:09:00.900 Processing file lib/util/crc64.c 00:09:00.900 Processing file lib/util/base64.c 00:09:00.900 Processing file lib/util/crc16.c 00:09:00.900 Processing file lib/util/pipe.c 00:09:00.900 Processing file lib/util/crc32_ieee.c 00:09:00.900 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:00.900 Processing file lib/vfio_user/host/vfio_user.c 00:09:01.157 Processing file lib/vhost/rte_vhost_user.c 00:09:01.157 Processing file lib/vhost/vhost.c 00:09:01.157 Processing file lib/vhost/vhost_scsi.c 00:09:01.157 Processing file lib/vhost/vhost_rpc.c 00:09:01.157 Processing file lib/vhost/vhost_internal.h 00:09:01.157 Processing file lib/vhost/vhost_blk.c 00:09:01.415 Processing file lib/virtio/virtio_pci.c 00:09:01.415 Processing file lib/virtio/virtio.c 00:09:01.415 Processing file lib/virtio/virtio_vfio_user.c 00:09:01.415 Processing file lib/virtio/virtio_vhost_user.c 00:09:01.415 Processing file lib/vmd/led.c 00:09:01.415 Processing file lib/vmd/vmd.c 00:09:01.415 Processing file module/accel/dsa/accel_dsa.c 00:09:01.415 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:01.673 Processing file module/accel/error/accel_error_rpc.c 00:09:01.673 Processing file 
module/accel/error/accel_error.c 00:09:01.673 Processing file module/accel/iaa/accel_iaa.c 00:09:01.673 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:01.673 Processing file module/accel/ioat/accel_ioat.c 00:09:01.673 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:01.933 Processing file module/bdev/aio/bdev_aio.c 00:09:01.933 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:01.933 Processing file module/bdev/delay/vbdev_delay.c 00:09:01.933 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:01.933 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:01.933 Processing file module/bdev/error/vbdev_error.c 00:09:02.191 Processing file module/bdev/ftl/bdev_ftl.c 00:09:02.191 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:02.191 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:02.191 Processing file module/bdev/gpt/gpt.h 00:09:02.191 Processing file module/bdev/gpt/gpt.c 00:09:02.191 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:02.191 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:02.485 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:02.485 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:02.485 Processing file module/bdev/malloc/bdev_malloc.c 00:09:02.485 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:02.485 Processing file module/bdev/null/bdev_null_rpc.c 00:09:02.485 Processing file module/bdev/null/bdev_null.c 00:09:02.794 Processing file module/bdev/nvme/bdev_nvme.c 00:09:02.795 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:02.795 Processing file module/bdev/nvme/vbdev_opal.c 00:09:02.795 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:02.795 Processing file module/bdev/nvme/nvme_rpc.c 00:09:02.795 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:02.795 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:02.795 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:02.795 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:03.054 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:03.054 Processing file module/bdev/raid/raid0.c 00:09:03.054 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:03.054 Processing file module/bdev/raid/bdev_raid.c 00:09:03.054 Processing file module/bdev/raid/concat.c 00:09:03.054 Processing file module/bdev/raid/raid5f.c 00:09:03.054 Processing file module/bdev/raid/raid1.c 00:09:03.054 Processing file module/bdev/raid/bdev_raid.h 00:09:03.054 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:03.054 Processing file module/bdev/split/vbdev_split.c 00:09:03.313 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:03.313 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:03.313 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:03.313 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:03.313 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:03.313 Processing file module/blob/bdev/blob_bdev.c 00:09:03.571 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:03.571 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:03.571 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:03.571 Processing file module/event/subsystems/accel/accel.c 00:09:03.571 Processing file module/event/subsystems/bdev/bdev.c 00:09:03.829 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:03.829 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:03.829 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:03.829 Processing file 
module/event/subsystems/keyring/keyring.c 00:09:03.829 Processing file module/event/subsystems/nbd/nbd.c 00:09:04.088 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:04.088 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:04.088 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:04.088 Processing file module/event/subsystems/scsi/scsi.c 00:09:04.088 Processing file module/event/subsystems/sock/sock.c 00:09:04.088 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:04.347 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:04.347 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:04.347 Processing file module/event/subsystems/vmd/vmd.c 00:09:04.347 Processing file module/keyring/file/keyring_rpc.c 00:09:04.347 Processing file module/keyring/file/keyring.c 00:09:04.347 Processing file module/keyring/linux/keyring.c 00:09:04.347 Processing file module/keyring/linux/keyring_rpc.c 00:09:04.606 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:04.606 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:04.606 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:04.606 Processing file module/sock/sock_kernel.h 00:09:04.866 Processing file module/sock/posix/posix.c 00:09:04.866 Writing directory view page. 00:09:04.866 Overall coverage rate: 00:09:04.866 lines......: 38.7% (40790 of 105370 lines) 00:09:04.866 functions..: 42.3% (3711 of 8766 functions) 00:09:04.866 00:09:04.866 00:09:04.866 ===================== 00:09:04.866 All unit tests passed 00:09:04.866 ===================== 00:09:04.866 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:04.866 11:32:36 unittest -- unit/unittest.sh@305 -- # set +x 00:09:04.866 00:09:04.866 00:09:04.866 ************************************ 00:09:04.866 END TEST unittest 00:09:04.866 ************************************ 00:09:04.866 00:09:04.866 real 4m7.714s 00:09:04.866 user 3m36.634s 00:09:04.866 sys 0m22.914s 00:09:04.866 11:32:36 unittest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:04.866 11:32:36 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:04.866 11:32:36 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:04.866 11:32:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:04.866 11:32:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:04.866 11:32:36 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:04.866 11:32:36 -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:04.866 11:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:04.866 11:32:36 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:04.866 11:32:36 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:04.866 11:32:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:04.866 11:32:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:04.866 11:32:36 -- common/autotest_common.sh@10 -- # set +x 00:09:04.866 ************************************ 00:09:04.866 START TEST env 00:09:04.866 ************************************ 00:09:04.866 11:32:36 env -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:04.866 * Looking for test storage... 
00:09:04.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:04.866 11:32:36 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:04.866 11:32:36 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:04.866 11:32:36 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:04.866 11:32:36 env -- common/autotest_common.sh@10 -- # set +x 00:09:04.866 ************************************ 00:09:04.866 START TEST env_memory 00:09:04.866 ************************************ 00:09:04.866 11:32:36 env.env_memory -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:04.866 00:09:04.866 00:09:04.866 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.866 http://cunit.sourceforge.net/ 00:09:04.866 00:09:04.866 00:09:04.866 Suite: memory 00:09:04.866 Test: alloc and free memory map ...[2024-06-10 11:32:36.923150] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:05.124 passed 00:09:05.124 Test: mem map translation ...[2024-06-10 11:32:36.961061] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:05.124 [2024-06-10 11:32:36.961191] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:05.124 [2024-06-10 11:32:36.961299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:05.124 [2024-06-10 11:32:36.961372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:05.124 passed 00:09:05.124 Test: mem map registration ...[2024-06-10 11:32:37.027347] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:05.124 [2024-06-10 11:32:37.027503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:05.124 passed 00:09:05.124 Test: mem map adjacent registrations ...passed 00:09:05.124 00:09:05.124 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.124 suites 1 1 n/a 0 0 00:09:05.124 tests 4 4 4 0 0 00:09:05.124 asserts 152 152 152 0 n/a 00:09:05.124 00:09:05.124 Elapsed time = 0.225 seconds 00:09:05.124 00:09:05.124 real 0m0.267s 00:09:05.124 user 0m0.229s 00:09:05.124 sys 0m0.039s 00:09:05.124 11:32:37 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:05.124 11:32:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:05.124 ************************************ 00:09:05.124 END TEST env_memory 00:09:05.124 ************************************ 00:09:05.382 11:32:37 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:05.382 11:32:37 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:05.382 11:32:37 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:05.382 11:32:37 env -- common/autotest_common.sh@10 -- # set +x 00:09:05.382 ************************************ 00:09:05.382 START TEST env_vtophys 00:09:05.382 ************************************ 00:09:05.382 11:32:37 
env.env_vtophys -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:05.382 EAL: lib.eal log level changed from notice to debug 00:09:05.382 EAL: Detected lcore 0 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 1 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 2 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 3 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 4 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 5 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 6 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 7 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 8 as core 0 on socket 0 00:09:05.382 EAL: Detected lcore 9 as core 0 on socket 0 00:09:05.382 EAL: Maximum logical cores by configuration: 128 00:09:05.382 EAL: Detected CPU lcores: 10 00:09:05.382 EAL: Detected NUMA nodes: 1 00:09:05.382 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:05.382 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:05.382 EAL: Checking presence of .so 'librte_eal.so' 00:09:05.382 EAL: Detected static linkage of DPDK 00:09:05.382 EAL: No shared files mode enabled, IPC will be disabled 00:09:05.382 EAL: Selected IOVA mode 'PA' 00:09:05.382 EAL: Probing VFIO support... 00:09:05.382 EAL: IOMMU type 1 (Type 1) is supported 00:09:05.382 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:05.382 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:05.382 EAL: VFIO support initialized 00:09:05.382 EAL: Ask a virtual area of 0x2e000 bytes 00:09:05.382 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:05.382 EAL: Setting up physically contiguous memory... 00:09:05.382 EAL: Setting maximum number of open files to 1048576 00:09:05.382 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:05.382 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:05.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:05.382 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:05.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:05.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:05.382 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:05.382 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:05.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:05.382 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:05.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:05.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:05.382 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:05.382 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:05.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:05.382 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:05.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:05.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:05.382 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:05.382 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:05.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:05.382 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:05.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:05.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:05.382 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:05.382 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
00:09:05.382 EAL: Hugepages will be freed exactly as allocated. 00:09:05.382 EAL: No shared files mode enabled, IPC is disabled 00:09:05.382 EAL: No shared files mode enabled, IPC is disabled 00:09:05.382 EAL: TSC frequency is ~2100000 KHz 00:09:05.382 EAL: Main lcore 0 is ready (tid=7fce459e7a80;cpuset=[0]) 00:09:05.382 EAL: Trying to obtain current memory policy. 00:09:05.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.382 EAL: Restoring previous memory policy: 0 00:09:05.382 EAL: request: mp_malloc_sync 00:09:05.382 EAL: No shared files mode enabled, IPC is disabled 00:09:05.382 EAL: Heap on socket 0 was expanded by 2MB 00:09:05.382 EAL: No shared files mode enabled, IPC is disabled 00:09:05.382 EAL: Mem event callback 'spdk:(nil)' registered 00:09:05.382 00:09:05.382 00:09:05.382 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.382 http://cunit.sourceforge.net/ 00:09:05.382 00:09:05.382 00:09:05.382 Suite: components_suite 00:09:05.946 Test: vtophys_malloc_test ...passed 00:09:05.946 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:05.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.946 EAL: Restoring previous memory policy: 0 00:09:05.946 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.946 EAL: request: mp_malloc_sync 00:09:05.946 EAL: No shared files mode enabled, IPC is disabled 00:09:05.946 EAL: Heap on socket 0 was expanded by 4MB 00:09:05.946 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.946 EAL: request: mp_malloc_sync 00:09:05.947 EAL: No shared files mode enabled, IPC is disabled 00:09:05.947 EAL: Heap on socket 0 was shrunk by 4MB 00:09:05.947 EAL: Trying to obtain current memory policy. 00:09:05.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.947 EAL: Restoring previous memory policy: 0 00:09:05.947 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.947 EAL: request: mp_malloc_sync 00:09:05.947 EAL: No shared files mode enabled, IPC is disabled 00:09:05.947 EAL: Heap on socket 0 was expanded by 6MB 00:09:05.947 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.947 EAL: request: mp_malloc_sync 00:09:05.947 EAL: No shared files mode enabled, IPC is disabled 00:09:05.947 EAL: Heap on socket 0 was shrunk by 6MB 00:09:05.947 EAL: Trying to obtain current memory policy. 00:09:05.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.947 EAL: Restoring previous memory policy: 0 00:09:05.947 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.947 EAL: request: mp_malloc_sync 00:09:05.947 EAL: No shared files mode enabled, IPC is disabled 00:09:05.947 EAL: Heap on socket 0 was expanded by 10MB 00:09:05.947 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.947 EAL: request: mp_malloc_sync 00:09:05.947 EAL: No shared files mode enabled, IPC is disabled 00:09:05.947 EAL: Heap on socket 0 was shrunk by 10MB 00:09:06.204 EAL: Trying to obtain current memory policy. 00:09:06.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:06.204 EAL: Restoring previous memory policy: 0 00:09:06.204 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.204 EAL: request: mp_malloc_sync 00:09:06.204 EAL: No shared files mode enabled, IPC is disabled 00:09:06.204 EAL: Heap on socket 0 was expanded by 18MB 00:09:06.204 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.204 EAL: request: mp_malloc_sync 00:09:06.204 EAL: No shared files mode enabled, IPC is disabled 00:09:06.204 EAL: Heap on socket 0 was shrunk by 18MB 00:09:06.204 EAL: Trying to obtain current memory policy. 
00:09:06.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:06.204 EAL: Restoring previous memory policy: 0 00:09:06.204 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.204 EAL: request: mp_malloc_sync 00:09:06.204 EAL: No shared files mode enabled, IPC is disabled 00:09:06.204 EAL: Heap on socket 0 was expanded by 34MB 00:09:06.204 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.204 EAL: request: mp_malloc_sync 00:09:06.204 EAL: No shared files mode enabled, IPC is disabled 00:09:06.204 EAL: Heap on socket 0 was shrunk by 34MB 00:09:06.204 EAL: Trying to obtain current memory policy. 00:09:06.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:06.461 EAL: Restoring previous memory policy: 0 00:09:06.461 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.461 EAL: request: mp_malloc_sync 00:09:06.461 EAL: No shared files mode enabled, IPC is disabled 00:09:06.461 EAL: Heap on socket 0 was expanded by 66MB 00:09:06.461 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.461 EAL: request: mp_malloc_sync 00:09:06.461 EAL: No shared files mode enabled, IPC is disabled 00:09:06.461 EAL: Heap on socket 0 was shrunk by 66MB 00:09:06.718 EAL: Trying to obtain current memory policy. 00:09:06.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:06.718 EAL: Restoring previous memory policy: 0 00:09:06.718 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.718 EAL: request: mp_malloc_sync 00:09:06.718 EAL: No shared files mode enabled, IPC is disabled 00:09:06.718 EAL: Heap on socket 0 was expanded by 130MB 00:09:06.976 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.976 EAL: request: mp_malloc_sync 00:09:06.976 EAL: No shared files mode enabled, IPC is disabled 00:09:06.976 EAL: Heap on socket 0 was shrunk by 130MB 00:09:07.233 EAL: Trying to obtain current memory policy. 00:09:07.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:07.233 EAL: Restoring previous memory policy: 0 00:09:07.233 EAL: Calling mem event callback 'spdk:(nil)' 00:09:07.233 EAL: request: mp_malloc_sync 00:09:07.233 EAL: No shared files mode enabled, IPC is disabled 00:09:07.233 EAL: Heap on socket 0 was expanded by 258MB 00:09:07.800 EAL: Calling mem event callback 'spdk:(nil)' 00:09:07.800 EAL: request: mp_malloc_sync 00:09:07.800 EAL: No shared files mode enabled, IPC is disabled 00:09:07.800 EAL: Heap on socket 0 was shrunk by 258MB 00:09:08.058 EAL: Trying to obtain current memory policy. 00:09:08.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.316 EAL: Restoring previous memory policy: 0 00:09:08.316 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.316 EAL: request: mp_malloc_sync 00:09:08.316 EAL: No shared files mode enabled, IPC is disabled 00:09:08.316 EAL: Heap on socket 0 was expanded by 514MB 00:09:09.249 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.249 EAL: request: mp_malloc_sync 00:09:09.249 EAL: No shared files mode enabled, IPC is disabled 00:09:09.249 EAL: Heap on socket 0 was shrunk by 514MB 00:09:10.184 EAL: Trying to obtain current memory policy. 
00:09:10.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:10.445 EAL: Restoring previous memory policy: 0 00:09:10.445 EAL: Calling mem event callback 'spdk:(nil)' 00:09:10.445 EAL: request: mp_malloc_sync 00:09:10.445 EAL: No shared files mode enabled, IPC is disabled 00:09:10.445 EAL: Heap on socket 0 was expanded by 1026MB 00:09:12.346 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.604 EAL: request: mp_malloc_sync 00:09:12.604 EAL: No shared files mode enabled, IPC is disabled 00:09:12.604 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:14.503 passed 00:09:14.503 00:09:14.503 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.503 suites 1 1 n/a 0 0 00:09:14.503 tests 2 2 2 0 0 00:09:14.503 asserts 6335 6335 6335 0 n/a 00:09:14.503 00:09:14.503 Elapsed time = 8.923 seconds 00:09:14.503 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.503 EAL: request: mp_malloc_sync 00:09:14.503 EAL: No shared files mode enabled, IPC is disabled 00:09:14.503 EAL: Heap on socket 0 was shrunk by 2MB 00:09:14.503 EAL: No shared files mode enabled, IPC is disabled 00:09:14.503 EAL: No shared files mode enabled, IPC is disabled 00:09:14.503 EAL: No shared files mode enabled, IPC is disabled 00:09:14.503 00:09:14.503 real 0m9.246s 00:09:14.503 user 0m8.130s 00:09:14.503 sys 0m0.968s 00:09:14.503 11:32:46 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:14.503 11:32:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:14.503 ************************************ 00:09:14.503 END TEST env_vtophys 00:09:14.503 ************************************ 00:09:14.503 11:32:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:14.503 11:32:46 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:14.503 11:32:46 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:14.503 11:32:46 env -- common/autotest_common.sh@10 -- # set +x 00:09:14.503 ************************************ 00:09:14.503 START TEST env_pci 00:09:14.503 ************************************ 00:09:14.503 11:32:46 env.env_pci -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:14.503 00:09:14.503 00:09:14.503 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.503 http://cunit.sourceforge.net/ 00:09:14.503 00:09:14.503 00:09:14.503 Suite: pci 00:09:14.503 Test: pci_hook ...[2024-06-10 11:32:46.544570] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 111969 has claimed it 00:09:14.762 passed 00:09:14.762 00:09:14.762 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.762 suites 1 1 n/a 0 0 00:09:14.762 tests 1 1 1 0 0 00:09:14.762 asserts 25 25 25 0 n/a 00:09:14.762 00:09:14.762 Elapsed time = 0.007 secondsEAL: Cannot find device (10000:00:01.0) 00:09:14.762 EAL: Failed to attach device on primary process 00:09:14.762 00:09:14.762 00:09:14.762 real 0m0.112s 00:09:14.762 user 0m0.058s 00:09:14.762 sys 0m0.054s 00:09:14.762 11:32:46 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:14.762 11:32:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:14.762 ************************************ 00:09:14.762 END TEST env_pci 00:09:14.762 ************************************ 00:09:14.762 11:32:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:14.762 11:32:46 env -- env/env.sh@15 -- # uname 00:09:14.762 11:32:46 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:14.762 11:32:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:14.762 11:32:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:14.762 11:32:46 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:09:14.762 11:32:46 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:14.762 11:32:46 env -- common/autotest_common.sh@10 -- # set +x 00:09:14.762 ************************************ 00:09:14.762 START TEST env_dpdk_post_init 00:09:14.762 ************************************ 00:09:14.762 11:32:46 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:14.762 EAL: Detected CPU lcores: 10 00:09:14.762 EAL: Detected NUMA nodes: 1 00:09:14.762 EAL: Detected static linkage of DPDK 00:09:14.762 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:14.762 EAL: Selected IOVA mode 'PA' 00:09:14.762 EAL: VFIO support initialized 00:09:15.021 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:15.021 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:15.021 Starting DPDK initialization... 00:09:15.021 Starting SPDK post initialization... 00:09:15.021 SPDK NVMe probe 00:09:15.021 Attaching to 0000:00:10.0 00:09:15.021 Attached to 0000:00:10.0 00:09:15.021 Cleaning up... 00:09:15.021 00:09:15.021 real 0m0.334s 00:09:15.021 user 0m0.087s 00:09:15.021 sys 0m0.148s 00:09:15.021 11:32:47 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:15.021 ************************************ 00:09:15.021 END TEST env_dpdk_post_init 00:09:15.021 11:32:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:15.021 ************************************ 00:09:15.021 11:32:47 env -- env/env.sh@26 -- # uname 00:09:15.021 11:32:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:15.021 11:32:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:15.021 11:32:47 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:15.021 11:32:47 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:15.021 11:32:47 env -- common/autotest_common.sh@10 -- # set +x 00:09:15.021 ************************************ 00:09:15.021 START TEST env_mem_callbacks 00:09:15.021 ************************************ 00:09:15.021 11:32:47 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:15.280 EAL: Detected CPU lcores: 10 00:09:15.280 EAL: Detected NUMA nodes: 1 00:09:15.280 EAL: Detected static linkage of DPDK 00:09:15.280 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:15.280 EAL: Selected IOVA mode 'PA' 00:09:15.280 EAL: VFIO support initialized 00:09:15.280 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:15.280 00:09:15.280 00:09:15.280 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.280 http://cunit.sourceforge.net/ 00:09:15.280 00:09:15.280 00:09:15.280 Suite: memory 00:09:15.280 Test: test ... 
00:09:15.280 register 0x200000200000 2097152 00:09:15.280 malloc 3145728 00:09:15.280 register 0x200000400000 4194304 00:09:15.280 buf 0x2000004fffc0 len 3145728 PASSED 00:09:15.280 malloc 64 00:09:15.280 buf 0x2000004ffec0 len 64 PASSED 00:09:15.280 malloc 4194304 00:09:15.280 register 0x200000800000 6291456 00:09:15.280 buf 0x2000009fffc0 len 4194304 PASSED 00:09:15.280 free 0x2000004fffc0 3145728 00:09:15.280 free 0x2000004ffec0 64 00:09:15.280 unregister 0x200000400000 4194304 PASSED 00:09:15.280 free 0x2000009fffc0 4194304 00:09:15.280 unregister 0x200000800000 6291456 PASSED 00:09:15.570 malloc 8388608 00:09:15.570 register 0x200000400000 10485760 00:09:15.570 buf 0x2000005fffc0 len 8388608 PASSED 00:09:15.570 free 0x2000005fffc0 8388608 00:09:15.570 unregister 0x200000400000 10485760 PASSED 00:09:15.570 passed 00:09:15.570 00:09:15.570 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.570 suites 1 1 n/a 0 0 00:09:15.570 tests 1 1 1 0 0 00:09:15.570 asserts 15 15 15 0 n/a 00:09:15.570 00:09:15.570 Elapsed time = 0.080 seconds 00:09:15.570 00:09:15.570 real 0m0.350s 00:09:15.570 user 0m0.127s 00:09:15.570 sys 0m0.124s 00:09:15.570 11:32:47 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:15.570 11:32:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:15.570 ************************************ 00:09:15.570 END TEST env_mem_callbacks 00:09:15.570 ************************************ 00:09:15.570 ************************************ 00:09:15.570 END TEST env 00:09:15.570 ************************************ 00:09:15.570 00:09:15.570 real 0m10.709s 00:09:15.570 user 0m8.815s 00:09:15.570 sys 0m1.564s 00:09:15.570 11:32:47 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:15.570 11:32:47 env -- common/autotest_common.sh@10 -- # set +x 00:09:15.570 11:32:47 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:15.570 11:32:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:15.570 11:32:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:15.570 11:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:15.570 ************************************ 00:09:15.570 START TEST rpc 00:09:15.570 ************************************ 00:09:15.570 11:32:47 rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:15.570 * Looking for test storage... 00:09:15.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:15.570 11:32:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=112105 00:09:15.570 11:32:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:15.570 11:32:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.570 11:32:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 112105 00:09:15.570 11:32:47 rpc -- common/autotest_common.sh@830 -- # '[' -z 112105 ']' 00:09:15.570 11:32:47 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.570 11:32:47 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:15.570 11:32:47 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
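The heap expand/shrink messages from env_vtophys and the register/unregister trace from env_mem_callbacks above both come from SPDK's env layer on top of the DPDK memory subsystem. The sketch below shows the kind of allocation and address-translation calls those tests are built around; it is a minimal illustration, not the test source, and the app name "env_sketch" and the 2 MB size/alignment are arbitrary choices for the example.

    /* Minimal SPDK env sketch: DMA-safe allocation plus vtophys translation.
     * Illustrative only; assumes the usual 2 MB hugepage granularity. */
    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
            struct spdk_env_opts opts;
            uint64_t paddr, size = 2 * 1024 * 1024;
            void *buf;

            spdk_env_opts_init(&opts);
            opts.name = "env_sketch";        /* arbitrary example name */
            if (spdk_env_init(&opts) < 0) {
                    fprintf(stderr, "spdk_env_init failed\n");
                    return 1;
            }

            /* Allocation from the DPDK-backed heap; heap growth is what shows up
             * as the "Heap on socket 0 was expanded by ..." messages above. */
            buf = spdk_dma_zmalloc(size, 0x200000, NULL);
            if (buf == NULL) {
                    return 1;
            }

            /* Virtual-to-physical translation, the operation that
             * vtophys_malloc_test stresses across many allocation sizes. */
            paddr = spdk_vtophys(buf, &size);
            printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

            spdk_dma_free(buf);
            return 0;
    }

Buffers obtained this way are already part of the registered memory map; the register/unregister lines printed by the mem_callbacks test are its own notification callback firing as regions are added to and removed from that map.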
00:09:15.570 11:32:47 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:15.570 11:32:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.828 [2024-06-10 11:32:47.751252] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:09:15.828 [2024-06-10 11:32:47.751514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112105 ] 00:09:16.086 [2024-06-10 11:32:47.942754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.345 [2024-06-10 11:32:48.172357] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:16.345 [2024-06-10 11:32:48.172439] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 112105' to capture a snapshot of events at runtime. 00:09:16.345 [2024-06-10 11:32:48.172496] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.345 [2024-06-10 11:32:48.172528] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.345 [2024-06-10 11:32:48.172548] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid112105 for offline analysis/debug. 00:09:16.345 [2024-06-10 11:32:48.172623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.281 11:32:49 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:17.281 11:32:49 rpc -- common/autotest_common.sh@863 -- # return 0 00:09:17.281 11:32:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:17.281 11:32:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:17.281 11:32:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:17.281 11:32:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:17.281 11:32:49 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:17.281 11:32:49 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:17.281 11:32:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.281 ************************************ 00:09:17.281 START TEST rpc_integrity 00:09:17.281 ************************************ 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.281 11:32:49 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:17.281 { 00:09:17.281 "name": "Malloc0", 00:09:17.281 "aliases": [ 00:09:17.281 "872de779-6c15-4363-a694-ec60787483e4" 00:09:17.281 ], 00:09:17.281 "product_name": "Malloc disk", 00:09:17.281 "block_size": 512, 00:09:17.281 "num_blocks": 16384, 00:09:17.281 "uuid": "872de779-6c15-4363-a694-ec60787483e4", 00:09:17.281 "assigned_rate_limits": { 00:09:17.281 "rw_ios_per_sec": 0, 00:09:17.281 "rw_mbytes_per_sec": 0, 00:09:17.281 "r_mbytes_per_sec": 0, 00:09:17.281 "w_mbytes_per_sec": 0 00:09:17.281 }, 00:09:17.281 "claimed": false, 00:09:17.281 "zoned": false, 00:09:17.281 "supported_io_types": { 00:09:17.281 "read": true, 00:09:17.281 "write": true, 00:09:17.281 "unmap": true, 00:09:17.281 "write_zeroes": true, 00:09:17.281 "flush": true, 00:09:17.281 "reset": true, 00:09:17.281 "compare": false, 00:09:17.281 "compare_and_write": false, 00:09:17.281 "abort": true, 00:09:17.281 "nvme_admin": false, 00:09:17.281 "nvme_io": false 00:09:17.281 }, 00:09:17.281 "memory_domains": [ 00:09:17.281 { 00:09:17.281 "dma_device_id": "system", 00:09:17.281 "dma_device_type": 1 00:09:17.281 }, 00:09:17.281 { 00:09:17.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.281 "dma_device_type": 2 00:09:17.281 } 00:09:17.281 ], 00:09:17.281 "driver_specific": {} 00:09:17.281 } 00:09:17.281 ]' 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:17.281 [2024-06-10 11:32:49.273835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:17.281 [2024-06-10 11:32:49.273951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.281 [2024-06-10 11:32:49.274006] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.281 [2024-06-10 11:32:49.274051] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.281 [2024-06-10 11:32:49.276929] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.281 [2024-06-10 11:32:49.277015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:17.281 Passthru0 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:17.281 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:09:17.281 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:17.281 { 00:09:17.281 "name": "Malloc0", 00:09:17.281 "aliases": [ 00:09:17.281 "872de779-6c15-4363-a694-ec60787483e4" 00:09:17.281 ], 00:09:17.281 "product_name": "Malloc disk", 00:09:17.281 "block_size": 512, 00:09:17.281 "num_blocks": 16384, 00:09:17.281 "uuid": "872de779-6c15-4363-a694-ec60787483e4", 00:09:17.281 "assigned_rate_limits": { 00:09:17.281 "rw_ios_per_sec": 0, 00:09:17.281 "rw_mbytes_per_sec": 0, 00:09:17.281 "r_mbytes_per_sec": 0, 00:09:17.281 "w_mbytes_per_sec": 0 00:09:17.281 }, 00:09:17.281 "claimed": true, 00:09:17.281 "claim_type": "exclusive_write", 00:09:17.281 "zoned": false, 00:09:17.281 "supported_io_types": { 00:09:17.281 "read": true, 00:09:17.281 "write": true, 00:09:17.281 "unmap": true, 00:09:17.281 "write_zeroes": true, 00:09:17.281 "flush": true, 00:09:17.281 "reset": true, 00:09:17.281 "compare": false, 00:09:17.281 "compare_and_write": false, 00:09:17.281 "abort": true, 00:09:17.281 "nvme_admin": false, 00:09:17.281 "nvme_io": false 00:09:17.281 }, 00:09:17.281 "memory_domains": [ 00:09:17.281 { 00:09:17.281 "dma_device_id": "system", 00:09:17.281 "dma_device_type": 1 00:09:17.281 }, 00:09:17.281 { 00:09:17.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.281 "dma_device_type": 2 00:09:17.281 } 00:09:17.281 ], 00:09:17.281 "driver_specific": {} 00:09:17.281 }, 00:09:17.281 { 00:09:17.281 "name": "Passthru0", 00:09:17.281 "aliases": [ 00:09:17.281 "926d4356-1273-5b57-8001-7c70ce54016b" 00:09:17.281 ], 00:09:17.281 "product_name": "passthru", 00:09:17.281 "block_size": 512, 00:09:17.281 "num_blocks": 16384, 00:09:17.281 "uuid": "926d4356-1273-5b57-8001-7c70ce54016b", 00:09:17.281 "assigned_rate_limits": { 00:09:17.281 "rw_ios_per_sec": 0, 00:09:17.281 "rw_mbytes_per_sec": 0, 00:09:17.281 "r_mbytes_per_sec": 0, 00:09:17.281 "w_mbytes_per_sec": 0 00:09:17.281 }, 00:09:17.281 "claimed": false, 00:09:17.281 "zoned": false, 00:09:17.281 "supported_io_types": { 00:09:17.281 "read": true, 00:09:17.281 "write": true, 00:09:17.281 "unmap": true, 00:09:17.281 "write_zeroes": true, 00:09:17.281 "flush": true, 00:09:17.281 "reset": true, 00:09:17.281 "compare": false, 00:09:17.281 "compare_and_write": false, 00:09:17.281 "abort": true, 00:09:17.281 "nvme_admin": false, 00:09:17.281 "nvme_io": false 00:09:17.281 }, 00:09:17.281 "memory_domains": [ 00:09:17.281 { 00:09:17.281 "dma_device_id": "system", 00:09:17.281 "dma_device_type": 1 00:09:17.281 }, 00:09:17.281 { 00:09:17.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.281 "dma_device_type": 2 00:09:17.281 } 00:09:17.281 ], 00:09:17.281 "driver_specific": { 00:09:17.282 "passthru": { 00:09:17.282 "name": "Passthru0", 00:09:17.282 "base_bdev_name": "Malloc0" 00:09:17.282 } 00:09:17.282 } 00:09:17.282 } 00:09:17.282 ]' 00:09:17.282 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:17.596 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:17.596 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.596 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 
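Every rpc_cmd invocation in this test (bdev_malloc_create, bdev_passthru_create, bdev_get_bdevs, ...) is a JSON-RPC call served over /var/tmp/spdk.sock by the spdk_tgt process started above. As a rough illustration of how such a method is exposed on the target side, the sketch below registers a trivial method; the method name "hello_sketch" and its handler are made up for this example and are not part of SPDK's RPC set.

    /* Illustrative registration of a JSON-RPC method inside an SPDK app.
     * "hello_sketch" is a hypothetical method, not a real SPDK RPC. */
    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"
    #include "spdk/json.h"

    static void
    rpc_hello_sketch(struct spdk_jsonrpc_request *request, const struct spdk_json_val *params)
    {
            struct spdk_json_write_ctx *w;

            if (params != NULL) {
                    spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                                     "hello_sketch takes no parameters");
                    return;
            }

            w = spdk_jsonrpc_begin_result(request);
            spdk_json_write_string(w, "hello");
            spdk_jsonrpc_end_result(request, w);
    }
    /* Makes the method callable at runtime over the app's RPC socket. */
    SPDK_RPC_REGISTER("hello_sketch", rpc_hello_sketch, SPDK_RPC_RUNTIME)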
00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.596 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.596 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:17.596 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:17.596 11:32:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:17.596 00:09:17.596 real 0m0.313s 00:09:17.596 user 0m0.177s 00:09:17.596 sys 0m0.035s 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:17.596 ************************************ 00:09:17.596 END TEST rpc_integrity 00:09:17.596 11:32:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:17.596 ************************************ 00:09:17.597 11:32:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:17.597 11:32:49 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:17.597 11:32:49 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:17.597 11:32:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.597 ************************************ 00:09:17.597 START TEST rpc_plugins 00:09:17.597 ************************************ 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:17.597 { 00:09:17.597 "name": "Malloc1", 00:09:17.597 "aliases": [ 00:09:17.597 "14e9b3c6-4cd1-48c5-9be3-da7f50add54c" 00:09:17.597 ], 00:09:17.597 "product_name": "Malloc disk", 00:09:17.597 "block_size": 4096, 00:09:17.597 "num_blocks": 256, 00:09:17.597 "uuid": "14e9b3c6-4cd1-48c5-9be3-da7f50add54c", 00:09:17.597 "assigned_rate_limits": { 00:09:17.597 "rw_ios_per_sec": 0, 00:09:17.597 "rw_mbytes_per_sec": 0, 00:09:17.597 "r_mbytes_per_sec": 0, 00:09:17.597 "w_mbytes_per_sec": 0 00:09:17.597 }, 00:09:17.597 "claimed": false, 00:09:17.597 "zoned": false, 00:09:17.597 "supported_io_types": { 00:09:17.597 "read": true, 00:09:17.597 "write": true, 00:09:17.597 "unmap": true, 00:09:17.597 "write_zeroes": true, 00:09:17.597 "flush": true, 00:09:17.597 "reset": true, 00:09:17.597 "compare": false, 00:09:17.597 "compare_and_write": false, 00:09:17.597 "abort": true, 00:09:17.597 "nvme_admin": false, 00:09:17.597 "nvme_io": false 00:09:17.597 }, 00:09:17.597 "memory_domains": [ 00:09:17.597 { 
00:09:17.597 "dma_device_id": "system", 00:09:17.597 "dma_device_type": 1 00:09:17.597 }, 00:09:17.597 { 00:09:17.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.597 "dma_device_type": 2 00:09:17.597 } 00:09:17.597 ], 00:09:17.597 "driver_specific": {} 00:09:17.597 } 00:09:17.597 ]' 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:17.597 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:17.597 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:17.855 11:32:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:17.855 00:09:17.855 real 0m0.146s 00:09:17.855 user 0m0.083s 00:09:17.856 sys 0m0.022s 00:09:17.856 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:17.856 ************************************ 00:09:17.856 END TEST rpc_plugins 00:09:17.856 ************************************ 00:09:17.856 11:32:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:17.856 11:32:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:17.856 11:32:49 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:17.856 11:32:49 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:17.856 11:32:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.856 ************************************ 00:09:17.856 START TEST rpc_trace_cmd_test 00:09:17.856 ************************************ 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:17.856 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid112105", 00:09:17.856 "tpoint_group_mask": "0x8", 00:09:17.856 "iscsi_conn": { 00:09:17.856 "mask": "0x2", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "scsi": { 00:09:17.856 "mask": "0x4", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "bdev": { 00:09:17.856 "mask": "0x8", 00:09:17.856 "tpoint_mask": "0xffffffffffffffff" 00:09:17.856 }, 00:09:17.856 "nvmf_rdma": { 00:09:17.856 "mask": "0x10", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "nvmf_tcp": { 00:09:17.856 "mask": "0x20", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "ftl": { 00:09:17.856 
"mask": "0x40", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "blobfs": { 00:09:17.856 "mask": "0x80", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "dsa": { 00:09:17.856 "mask": "0x200", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "thread": { 00:09:17.856 "mask": "0x400", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "nvme_pcie": { 00:09:17.856 "mask": "0x800", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "iaa": { 00:09:17.856 "mask": "0x1000", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "nvme_tcp": { 00:09:17.856 "mask": "0x2000", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "bdev_nvme": { 00:09:17.856 "mask": "0x4000", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 }, 00:09:17.856 "sock": { 00:09:17.856 "mask": "0x8000", 00:09:17.856 "tpoint_mask": "0x0" 00:09:17.856 } 00:09:17.856 }' 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:17.856 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:18.115 11:32:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:18.115 00:09:18.115 real 0m0.221s 00:09:18.115 user 0m0.193s 00:09:18.115 sys 0m0.022s 00:09:18.115 11:32:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:18.115 11:32:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.115 ************************************ 00:09:18.115 END TEST rpc_trace_cmd_test 00:09:18.115 ************************************ 00:09:18.115 11:32:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:18.115 11:32:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:18.115 11:32:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:18.115 11:32:49 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:18.115 11:32:49 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:18.115 11:32:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.115 ************************************ 00:09:18.115 START TEST rpc_daemon_integrity 00:09:18.115 ************************************ 00:09:18.115 11:32:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:09:18.115 11:32:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:18.115 11:32:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.115 11:32:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.115 11:32:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.115 11:32:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:18.115 11:32:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:18.115 { 00:09:18.115 "name": "Malloc2", 00:09:18.115 "aliases": [ 00:09:18.115 "27722106-b635-4cda-9795-4394f6861dd5" 00:09:18.115 ], 00:09:18.115 "product_name": "Malloc disk", 00:09:18.115 "block_size": 512, 00:09:18.115 "num_blocks": 16384, 00:09:18.115 "uuid": "27722106-b635-4cda-9795-4394f6861dd5", 00:09:18.115 "assigned_rate_limits": { 00:09:18.115 "rw_ios_per_sec": 0, 00:09:18.115 "rw_mbytes_per_sec": 0, 00:09:18.115 "r_mbytes_per_sec": 0, 00:09:18.115 "w_mbytes_per_sec": 0 00:09:18.115 }, 00:09:18.115 "claimed": false, 00:09:18.115 "zoned": false, 00:09:18.115 "supported_io_types": { 00:09:18.115 "read": true, 00:09:18.115 "write": true, 00:09:18.115 "unmap": true, 00:09:18.115 "write_zeroes": true, 00:09:18.115 "flush": true, 00:09:18.115 "reset": true, 00:09:18.115 "compare": false, 00:09:18.115 "compare_and_write": false, 00:09:18.115 "abort": true, 00:09:18.115 "nvme_admin": false, 00:09:18.115 "nvme_io": false 00:09:18.115 }, 00:09:18.115 "memory_domains": [ 00:09:18.115 { 00:09:18.115 "dma_device_id": "system", 00:09:18.115 "dma_device_type": 1 00:09:18.115 }, 00:09:18.115 { 00:09:18.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.115 "dma_device_type": 2 00:09:18.115 } 00:09:18.115 ], 00:09:18.115 "driver_specific": {} 00:09:18.115 } 00:09:18.115 ]' 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.115 [2024-06-10 11:32:50.131534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:18.115 [2024-06-10 11:32:50.131633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.115 [2024-06-10 11:32:50.131694] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:18.115 [2024-06-10 11:32:50.131719] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.115 [2024-06-10 11:32:50.134444] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.115 [2024-06-10 11:32:50.134530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:18.115 Passthru0 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.115 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:18.115 { 00:09:18.115 "name": "Malloc2", 00:09:18.115 "aliases": [ 00:09:18.115 "27722106-b635-4cda-9795-4394f6861dd5" 00:09:18.115 ], 00:09:18.115 "product_name": "Malloc disk", 00:09:18.115 "block_size": 512, 00:09:18.115 "num_blocks": 16384, 00:09:18.115 "uuid": "27722106-b635-4cda-9795-4394f6861dd5", 00:09:18.115 "assigned_rate_limits": { 00:09:18.115 "rw_ios_per_sec": 0, 00:09:18.115 "rw_mbytes_per_sec": 0, 00:09:18.115 "r_mbytes_per_sec": 0, 00:09:18.115 "w_mbytes_per_sec": 0 00:09:18.115 }, 00:09:18.115 "claimed": true, 00:09:18.115 "claim_type": "exclusive_write", 00:09:18.115 "zoned": false, 00:09:18.115 "supported_io_types": { 00:09:18.115 "read": true, 00:09:18.115 "write": true, 00:09:18.115 "unmap": true, 00:09:18.115 "write_zeroes": true, 00:09:18.115 "flush": true, 00:09:18.115 "reset": true, 00:09:18.115 "compare": false, 00:09:18.115 "compare_and_write": false, 00:09:18.115 "abort": true, 00:09:18.115 "nvme_admin": false, 00:09:18.115 "nvme_io": false 00:09:18.115 }, 00:09:18.115 "memory_domains": [ 00:09:18.115 { 00:09:18.115 "dma_device_id": "system", 00:09:18.115 "dma_device_type": 1 00:09:18.115 }, 00:09:18.115 { 00:09:18.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.115 "dma_device_type": 2 00:09:18.115 } 00:09:18.115 ], 00:09:18.115 "driver_specific": {} 00:09:18.115 }, 00:09:18.115 { 00:09:18.115 "name": "Passthru0", 00:09:18.115 "aliases": [ 00:09:18.115 "e2b772ca-e424-525e-a1e5-1486fd9bd911" 00:09:18.115 ], 00:09:18.115 "product_name": "passthru", 00:09:18.115 "block_size": 512, 00:09:18.115 "num_blocks": 16384, 00:09:18.115 "uuid": "e2b772ca-e424-525e-a1e5-1486fd9bd911", 00:09:18.115 "assigned_rate_limits": { 00:09:18.115 "rw_ios_per_sec": 0, 00:09:18.116 "rw_mbytes_per_sec": 0, 00:09:18.116 "r_mbytes_per_sec": 0, 00:09:18.116 "w_mbytes_per_sec": 0 00:09:18.116 }, 00:09:18.116 "claimed": false, 00:09:18.116 "zoned": false, 00:09:18.116 "supported_io_types": { 00:09:18.116 "read": true, 00:09:18.116 "write": true, 00:09:18.116 "unmap": true, 00:09:18.116 "write_zeroes": true, 00:09:18.116 "flush": true, 00:09:18.116 "reset": true, 00:09:18.116 "compare": false, 00:09:18.116 "compare_and_write": false, 00:09:18.116 "abort": true, 00:09:18.116 "nvme_admin": false, 00:09:18.116 "nvme_io": false 00:09:18.116 }, 00:09:18.116 "memory_domains": [ 00:09:18.116 { 00:09:18.116 "dma_device_id": "system", 00:09:18.116 "dma_device_type": 1 00:09:18.116 }, 00:09:18.116 { 00:09:18.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.116 "dma_device_type": 2 00:09:18.116 } 00:09:18.116 ], 00:09:18.116 "driver_specific": { 00:09:18.116 "passthru": { 00:09:18.116 "name": "Passthru0", 00:09:18.116 "base_bdev_name": "Malloc2" 00:09:18.116 } 00:09:18.116 } 00:09:18.116 } 00:09:18.116 ]' 00:09:18.116 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:18.374 00:09:18.374 real 0m0.317s 00:09:18.374 user 0m0.182s 00:09:18.374 sys 0m0.032s 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:18.374 11:32:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.374 ************************************ 00:09:18.374 END TEST rpc_daemon_integrity 00:09:18.374 ************************************ 00:09:18.374 11:32:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:18.374 11:32:50 rpc -- rpc/rpc.sh@84 -- # killprocess 112105 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@949 -- # '[' -z 112105 ']' 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@953 -- # kill -0 112105 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@954 -- # uname 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 112105 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:18.374 killing process with pid 112105 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 112105' 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@968 -- # kill 112105 00:09:18.374 11:32:50 rpc -- common/autotest_common.sh@973 -- # wait 112105 00:09:21.700 ************************************ 00:09:21.700 END TEST rpc 00:09:21.700 ************************************ 00:09:21.700 00:09:21.700 real 0m5.625s 00:09:21.700 user 0m6.396s 00:09:21.700 sys 0m0.821s 00:09:21.700 11:32:53 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:21.700 11:32:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.700 11:32:53 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:21.700 11:32:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:21.700 11:32:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:21.700 11:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:21.700 ************************************ 00:09:21.700 START TEST skip_rpc 00:09:21.700 ************************************ 
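[editor's note] Each suite above tears down its target with the same killprocess pattern: confirm the pid is still alive, confirm it is the SPDK reactor rather than a sudo wrapper, then kill and wait so the exit status is reaped. A stripped-down illustration of that helper (the real one in autotest_common.sh handles sudo-wrapped targets specially) is:

# Illustrative reduction of the killprocess helper used throughout these tests.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                 # is the process still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for an SPDK target
    [ "$name" = sudo ] && return 1             # the real helper special-cases sudo; skip that here
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap it so the exit code is collected
}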
00:09:21.700 11:32:53 skip_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:21.700 * Looking for test storage... 00:09:21.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:21.700 11:32:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:21.700 11:32:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:21.700 11:32:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:21.700 11:32:53 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:21.701 11:32:53 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:21.701 11:32:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.701 ************************************ 00:09:21.701 START TEST skip_rpc 00:09:21.701 ************************************ 00:09:21.701 11:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:09:21.701 11:32:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=112371 00:09:21.701 11:32:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:21.701 11:32:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:21.701 11:32:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:21.701 [2024-06-10 11:32:53.385055] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:09:21.701 [2024-06-10 11:32:53.385237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112371 ] 00:09:21.701 [2024-06-10 11:32:53.556941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.960 [2024-06-10 11:32:53.790009] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 112371 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 112371 ']' 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 112371 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 112371 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:27.224 killing process with pid 112371 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 112371' 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 112371 00:09:27.224 11:32:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 112371 00:09:29.126 00:09:29.126 real 0m7.789s 00:09:29.126 user 0m7.329s 00:09:29.126 sys 0m0.367s 00:09:29.126 11:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:29.126 11:33:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.126 ************************************ 00:09:29.126 END TEST skip_rpc 00:09:29.126 ************************************ 00:09:29.126 11:33:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:29.126 11:33:01 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:29.126 11:33:01 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:29.126 11:33:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.126 ************************************ 00:09:29.126 START TEST skip_rpc_with_json 00:09:29.126 ************************************ 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=112496 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 112496 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 112496 ']' 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:29.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
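[editor's note] The skip_rpc case that just completed is a negative test: with --no-rpc-server nothing listens on /var/tmp/spdk.sock, so any RPC must fail. A hedged stand-alone version of that check, assuming it is run from the repository root, might be:

# Sketch: start a target without an RPC server and assert that an RPC call fails.
build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5    # the test sleeps instead of using waitforlisten, since there is no RPC socket to poll

if scripts/rpc.py spdk_get_version; then
    echo "ERROR: RPC unexpectedly succeeded" >&2
    kill "$spdk_pid"; exit 1
fi

kill "$spdk_pid"
wait "$spdk_pid"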
00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:29.126 11:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:29.384 [2024-06-10 11:33:01.230009] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:09:29.384 [2024-06-10 11:33:01.230200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112496 ] 00:09:29.384 [2024-06-10 11:33:01.392653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.642 [2024-06-10 11:33:01.615311] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:30.577 [2024-06-10 11:33:02.506312] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:30.577 request: 00:09:30.577 { 00:09:30.577 "trtype": "tcp", 00:09:30.577 "method": "nvmf_get_transports", 00:09:30.577 "req_id": 1 00:09:30.577 } 00:09:30.577 Got JSON-RPC error response 00:09:30.577 response: 00:09:30.577 { 00:09:30.577 "code": -19, 00:09:30.577 "message": "No such device" 00:09:30.577 } 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:30.577 [2024-06-10 11:33:02.514398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.577 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:30.835 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.835 11:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:30.835 { 00:09:30.835 "subsystems": [ 00:09:30.835 { 00:09:30.835 "subsystem": "scheduler", 00:09:30.835 "config": [ 00:09:30.835 { 00:09:30.835 "method": "framework_set_scheduler", 00:09:30.835 "params": { 00:09:30.835 "name": "static" 00:09:30.835 } 00:09:30.835 } 00:09:30.835 ] 00:09:30.835 }, 00:09:30.835 { 00:09:30.835 "subsystem": "vmd", 00:09:30.835 "config": [] 00:09:30.835 }, 00:09:30.835 { 00:09:30.835 "subsystem": "sock", 00:09:30.835 "config": [ 00:09:30.835 { 00:09:30.835 "method": "sock_set_default_impl", 00:09:30.835 "params": { 00:09:30.835 "impl_name": "posix" 00:09:30.835 } 00:09:30.835 }, 00:09:30.835 { 
00:09:30.835 "method": "sock_impl_set_options", 00:09:30.836 "params": { 00:09:30.836 "impl_name": "ssl", 00:09:30.836 "recv_buf_size": 4096, 00:09:30.836 "send_buf_size": 4096, 00:09:30.836 "enable_recv_pipe": true, 00:09:30.836 "enable_quickack": false, 00:09:30.836 "enable_placement_id": 0, 00:09:30.836 "enable_zerocopy_send_server": true, 00:09:30.836 "enable_zerocopy_send_client": false, 00:09:30.836 "zerocopy_threshold": 0, 00:09:30.836 "tls_version": 0, 00:09:30.836 "enable_ktls": false 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "sock_impl_set_options", 00:09:30.836 "params": { 00:09:30.836 "impl_name": "posix", 00:09:30.836 "recv_buf_size": 2097152, 00:09:30.836 "send_buf_size": 2097152, 00:09:30.836 "enable_recv_pipe": true, 00:09:30.836 "enable_quickack": false, 00:09:30.836 "enable_placement_id": 0, 00:09:30.836 "enable_zerocopy_send_server": true, 00:09:30.836 "enable_zerocopy_send_client": false, 00:09:30.836 "zerocopy_threshold": 0, 00:09:30.836 "tls_version": 0, 00:09:30.836 "enable_ktls": false 00:09:30.836 } 00:09:30.836 } 00:09:30.836 ] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "iobuf", 00:09:30.836 "config": [ 00:09:30.836 { 00:09:30.836 "method": "iobuf_set_options", 00:09:30.836 "params": { 00:09:30.836 "small_pool_count": 8192, 00:09:30.836 "large_pool_count": 1024, 00:09:30.836 "small_bufsize": 8192, 00:09:30.836 "large_bufsize": 135168 00:09:30.836 } 00:09:30.836 } 00:09:30.836 ] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "keyring", 00:09:30.836 "config": [] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "accel", 00:09:30.836 "config": [ 00:09:30.836 { 00:09:30.836 "method": "accel_set_options", 00:09:30.836 "params": { 00:09:30.836 "small_cache_size": 128, 00:09:30.836 "large_cache_size": 16, 00:09:30.836 "task_count": 2048, 00:09:30.836 "sequence_count": 2048, 00:09:30.836 "buf_count": 2048 00:09:30.836 } 00:09:30.836 } 00:09:30.836 ] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "bdev", 00:09:30.836 "config": [ 00:09:30.836 { 00:09:30.836 "method": "bdev_set_options", 00:09:30.836 "params": { 00:09:30.836 "bdev_io_pool_size": 65535, 00:09:30.836 "bdev_io_cache_size": 256, 00:09:30.836 "bdev_auto_examine": true, 00:09:30.836 "iobuf_small_cache_size": 128, 00:09:30.836 "iobuf_large_cache_size": 16 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "bdev_raid_set_options", 00:09:30.836 "params": { 00:09:30.836 "process_window_size_kb": 1024 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "bdev_nvme_set_options", 00:09:30.836 "params": { 00:09:30.836 "action_on_timeout": "none", 00:09:30.836 "timeout_us": 0, 00:09:30.836 "timeout_admin_us": 0, 00:09:30.836 "keep_alive_timeout_ms": 10000, 00:09:30.836 "arbitration_burst": 0, 00:09:30.836 "low_priority_weight": 0, 00:09:30.836 "medium_priority_weight": 0, 00:09:30.836 "high_priority_weight": 0, 00:09:30.836 "nvme_adminq_poll_period_us": 10000, 00:09:30.836 "nvme_ioq_poll_period_us": 0, 00:09:30.836 "io_queue_requests": 0, 00:09:30.836 "delay_cmd_submit": true, 00:09:30.836 "transport_retry_count": 4, 00:09:30.836 "bdev_retry_count": 3, 00:09:30.836 "transport_ack_timeout": 0, 00:09:30.836 "ctrlr_loss_timeout_sec": 0, 00:09:30.836 "reconnect_delay_sec": 0, 00:09:30.836 "fast_io_fail_timeout_sec": 0, 00:09:30.836 "disable_auto_failback": false, 00:09:30.836 "generate_uuids": false, 00:09:30.836 "transport_tos": 0, 00:09:30.836 "nvme_error_stat": false, 00:09:30.836 "rdma_srq_size": 0, 00:09:30.836 
"io_path_stat": false, 00:09:30.836 "allow_accel_sequence": false, 00:09:30.836 "rdma_max_cq_size": 0, 00:09:30.836 "rdma_cm_event_timeout_ms": 0, 00:09:30.836 "dhchap_digests": [ 00:09:30.836 "sha256", 00:09:30.836 "sha384", 00:09:30.836 "sha512" 00:09:30.836 ], 00:09:30.836 "dhchap_dhgroups": [ 00:09:30.836 "null", 00:09:30.836 "ffdhe2048", 00:09:30.836 "ffdhe3072", 00:09:30.836 "ffdhe4096", 00:09:30.836 "ffdhe6144", 00:09:30.836 "ffdhe8192" 00:09:30.836 ] 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "bdev_nvme_set_hotplug", 00:09:30.836 "params": { 00:09:30.836 "period_us": 100000, 00:09:30.836 "enable": false 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "bdev_iscsi_set_options", 00:09:30.836 "params": { 00:09:30.836 "timeout_sec": 30 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "bdev_wait_for_examine" 00:09:30.836 } 00:09:30.836 ] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "nvmf", 00:09:30.836 "config": [ 00:09:30.836 { 00:09:30.836 "method": "nvmf_set_config", 00:09:30.836 "params": { 00:09:30.836 "discovery_filter": "match_any", 00:09:30.836 "admin_cmd_passthru": { 00:09:30.836 "identify_ctrlr": false 00:09:30.836 } 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "nvmf_set_max_subsystems", 00:09:30.836 "params": { 00:09:30.836 "max_subsystems": 1024 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "nvmf_set_crdt", 00:09:30.836 "params": { 00:09:30.836 "crdt1": 0, 00:09:30.836 "crdt2": 0, 00:09:30.836 "crdt3": 0 00:09:30.836 } 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "method": "nvmf_create_transport", 00:09:30.836 "params": { 00:09:30.836 "trtype": "TCP", 00:09:30.836 "max_queue_depth": 128, 00:09:30.836 "max_io_qpairs_per_ctrlr": 127, 00:09:30.836 "in_capsule_data_size": 4096, 00:09:30.836 "max_io_size": 131072, 00:09:30.836 "io_unit_size": 131072, 00:09:30.836 "max_aq_depth": 128, 00:09:30.836 "num_shared_buffers": 511, 00:09:30.836 "buf_cache_size": 4294967295, 00:09:30.836 "dif_insert_or_strip": false, 00:09:30.836 "zcopy": false, 00:09:30.836 "c2h_success": true, 00:09:30.836 "sock_priority": 0, 00:09:30.836 "abort_timeout_sec": 1, 00:09:30.836 "ack_timeout": 0, 00:09:30.836 "data_wr_pool_size": 0 00:09:30.836 } 00:09:30.836 } 00:09:30.836 ] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "nbd", 00:09:30.836 "config": [] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "vhost_blk", 00:09:30.836 "config": [] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "scsi", 00:09:30.836 "config": null 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "iscsi", 00:09:30.836 "config": [ 00:09:30.836 { 00:09:30.836 "method": "iscsi_set_options", 00:09:30.836 "params": { 00:09:30.836 "node_base": "iqn.2016-06.io.spdk", 00:09:30.836 "max_sessions": 128, 00:09:30.836 "max_connections_per_session": 2, 00:09:30.836 "max_queue_depth": 64, 00:09:30.836 "default_time2wait": 2, 00:09:30.836 "default_time2retain": 20, 00:09:30.836 "first_burst_length": 8192, 00:09:30.836 "immediate_data": true, 00:09:30.836 "allow_duplicated_isid": false, 00:09:30.836 "error_recovery_level": 0, 00:09:30.836 "nop_timeout": 60, 00:09:30.836 "nop_in_interval": 30, 00:09:30.836 "disable_chap": false, 00:09:30.836 "require_chap": false, 00:09:30.836 "mutual_chap": false, 00:09:30.836 "chap_group": 0, 00:09:30.836 "max_large_datain_per_connection": 64, 00:09:30.836 "max_r2t_per_connection": 4, 00:09:30.836 "pdu_pool_size": 36864, 00:09:30.836 
"immediate_data_pool_size": 16384, 00:09:30.836 "data_out_pool_size": 2048 00:09:30.836 } 00:09:30.836 } 00:09:30.836 ] 00:09:30.836 }, 00:09:30.836 { 00:09:30.836 "subsystem": "vhost_scsi", 00:09:30.836 "config": [] 00:09:30.836 } 00:09:30.836 ] 00:09:30.836 } 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 112496 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 112496 ']' 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 112496 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 112496 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:30.836 killing process with pid 112496 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 112496' 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 112496 00:09:30.836 11:33:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 112496 00:09:34.126 11:33:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=112562 00:09:34.126 11:33:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:34.126 11:33:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 112562 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 112562 ']' 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 112562 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 112562 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:39.391 killing process with pid 112562 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 112562' 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 112562 00:09:39.391 11:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 112562 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:41.291 00:09:41.291 real 0m12.128s 00:09:41.291 user 0m11.576s 
00:09:41.291 sys 0m0.832s 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:41.291 ************************************ 00:09:41.291 END TEST skip_rpc_with_json 00:09:41.291 ************************************ 00:09:41.291 11:33:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:41.291 11:33:13 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:41.291 11:33:13 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:41.291 11:33:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.291 ************************************ 00:09:41.291 START TEST skip_rpc_with_delay 00:09:41.291 ************************************ 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:41.291 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:41.550 [2024-06-10 11:33:13.404943] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:41.550 [2024-06-10 11:33:13.405163] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:41.550 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:09:41.551 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:41.551 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:41.551 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:41.551 00:09:41.551 real 0m0.140s 00:09:41.551 user 0m0.075s 00:09:41.551 sys 0m0.065s 00:09:41.551 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:41.551 11:33:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:41.551 ************************************ 00:09:41.551 END TEST skip_rpc_with_delay 00:09:41.551 ************************************ 00:09:41.551 11:33:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:41.551 11:33:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:41.551 11:33:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:41.551 11:33:13 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:41.551 11:33:13 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:41.551 11:33:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.551 ************************************ 00:09:41.551 START TEST exit_on_failed_rpc_init 00:09:41.551 ************************************ 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=112708 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 112708 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 112708 ']' 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:41.551 11:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:41.809 [2024-06-10 11:33:13.622282] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:09:41.809 [2024-06-10 11:33:13.622549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112708 ] 00:09:41.809 [2024-06-10 11:33:13.795574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.066 [2024-06-10 11:33:14.020693] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:42.997 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:42.998 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:42.998 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:42.998 11:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:42.998 [2024-06-10 11:33:15.026918] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:09:42.998 [2024-06-10 11:33:15.027162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112731 ] 00:09:43.254 [2024-06-10 11:33:15.217387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.512 [2024-06-10 11:33:15.501915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.512 [2024-06-10 11:33:15.502065] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
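[editor's note] The "socket in use" error just above is the whole point of exit_on_failed_rpc_init: a first target already owns /var/tmp/spdk.sock, so a second instance started without its own -r socket must fail RPC initialization and exit non-zero. Reduced to its essentials, and assuming the first target is already running, the check is:

# Sketch: a second target on the same default RPC socket must fail to start.
if build/bin/spdk_tgt -m 0x2; then
    echo "ERROR: second target initialized despite the socket being in use" >&2
    exit 1
fi
echo "second target failed as expected"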
00:09:43.512 [2024-06-10 11:33:15.502125] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:43.512 [2024-06-10 11:33:15.502170] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 112708 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 112708 ']' 00:09:44.078 11:33:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 112708 00:09:44.078 11:33:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:09:44.078 11:33:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:44.078 11:33:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 112708 00:09:44.078 11:33:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:44.078 killing process with pid 112708 00:09:44.078 11:33:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:44.078 11:33:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 112708' 00:09:44.078 11:33:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 112708 00:09:44.078 11:33:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 112708 00:09:47.418 00:09:47.418 real 0m5.258s 00:09:47.418 user 0m5.944s 00:09:47.418 sys 0m0.658s 00:09:47.418 ************************************ 00:09:47.418 END TEST exit_on_failed_rpc_init 00:09:47.418 ************************************ 00:09:47.418 11:33:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:47.418 11:33:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:47.418 11:33:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:47.418 ************************************ 00:09:47.418 END TEST skip_rpc 00:09:47.418 ************************************ 00:09:47.418 00:09:47.418 real 0m25.625s 00:09:47.418 user 0m25.106s 00:09:47.418 sys 0m2.065s 00:09:47.418 11:33:18 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:47.418 11:33:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.418 11:33:18 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:47.418 11:33:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:47.418 11:33:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:47.418 11:33:18 -- common/autotest_common.sh@10 -- # set +x 
00:09:47.418 ************************************ 00:09:47.418 START TEST rpc_client 00:09:47.418 ************************************ 00:09:47.418 11:33:18 rpc_client -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:47.418 * Looking for test storage... 00:09:47.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:47.418 11:33:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:47.418 OK 00:09:47.418 11:33:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:47.418 ************************************ 00:09:47.418 END TEST rpc_client 00:09:47.418 ************************************ 00:09:47.418 00:09:47.418 real 0m0.167s 00:09:47.418 user 0m0.102s 00:09:47.418 sys 0m0.080s 00:09:47.418 11:33:19 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:47.418 11:33:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:47.418 11:33:19 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:47.418 11:33:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:47.418 11:33:19 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:47.418 11:33:19 -- common/autotest_common.sh@10 -- # set +x 00:09:47.418 ************************************ 00:09:47.418 START TEST json_config 00:09:47.418 ************************************ 00:09:47.418 11:33:19 json_config -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bbee0fed-6b03-45f8-b18d-6b37b07a5bb9 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bbee0fed-6b03-45f8-b18d-6b37b07a5bb9 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.418 11:33:19 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.418 11:33:19 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.418 11:33:19 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.418 11:33:19 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:47.418 11:33:19 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:47.418 11:33:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:47.418 11:33:19 json_config -- paths/export.sh@5 -- # export PATH 00:09:47.418 11:33:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@47 -- # : 0 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.418 INFO: JSON configuration test init 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.418 11:33:19 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@31 
-- # declare -A app_pid 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:47.418 11:33:19 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:47.418 11:33:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:47.418 11:33:19 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:47.418 11:33:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:47.418 Waiting for target to run... 00:09:47.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:47.418 11:33:19 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:47.418 11:33:19 json_config -- json_config/common.sh@9 -- # local app=target 00:09:47.418 11:33:19 json_config -- json_config/common.sh@10 -- # shift 00:09:47.418 11:33:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:47.418 11:33:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:47.418 11:33:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:47.418 11:33:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:47.418 11:33:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:47.418 11:33:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=112899 00:09:47.419 11:33:19 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:47.419 11:33:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
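[editor's note] The launch-and-wait pattern traced above reduces to roughly the following bash sketch. It is illustrative only: the harness uses its own waitforlisten helper, and the spdk_get_version probe and framework_start_init call are assumptions, not copied from the trace.

    # Sketch: start spdk_tgt on a private RPC socket and wait until it answers.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    tgt_pid=$!

    # Poll the socket until the RPC server responds (probe method is an assumption).
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version &>/dev/null && break
        sleep 0.1
    done

    # With --wait-for-rpc the app stays in the pre-init state until told to continue.
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init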
00:09:47.419 11:33:19 json_config -- json_config/common.sh@25 -- # waitforlisten 112899 /var/tmp/spdk_tgt.sock 00:09:47.419 11:33:19 json_config -- common/autotest_common.sh@830 -- # '[' -z 112899 ']' 00:09:47.419 11:33:19 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:47.419 11:33:19 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:47.419 11:33:19 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:47.419 11:33:19 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:47.419 11:33:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:47.419 [2024-06-10 11:33:19.276795] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:09:47.419 [2024-06-10 11:33:19.277277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112899 ] 00:09:47.677 [2024-06-10 11:33:19.684387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.936 [2024-06-10 11:33:19.914506] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.552 11:33:20 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:48.553 11:33:20 json_config -- common/autotest_common.sh@863 -- # return 0 00:09:48.553 11:33:20 json_config -- json_config/common.sh@26 -- # echo '' 00:09:48.553 00:09:48.553 11:33:20 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:09:48.553 11:33:20 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:48.553 11:33:20 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:48.553 11:33:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:48.553 11:33:20 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:48.553 11:33:20 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:48.553 11:33:20 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:48.553 11:33:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:48.553 11:33:20 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:48.553 11:33:20 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:48.553 11:33:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:49.489 11:33:21 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:49.489 11:33:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:49.489 11:33:21 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:49.489 11:33:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:49.489 11:33:21 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:49.489 11:33:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:49.489 11:33:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:49.489 11:33:21 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:49.489 11:33:21 json_config -- 
json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:49.489 11:33:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:49.746 11:33:21 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:49.746 11:33:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@55 -- # return 0 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:49.746 11:33:21 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:49.746 11:33:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:49.746 11:33:21 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:49.746 11:33:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:50.004 11:33:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:50.005 11:33:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:50.005 11:33:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:50.005 11:33:22 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:50.005 11:33:22 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:50.005 11:33:22 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:50.005 11:33:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:50.570 Nvme0n1p0 Nvme0n1p1 00:09:50.570 11:33:22 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:50.570 11:33:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 
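[editor's note] The notification check traced here is just the jq pipeline visible in the xtrace; as a standalone sketch using the same socket and filter string:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # List the notification types the target supports (bdev_register / bdev_unregister).
    $RPC notify_get_types | jq -r '.[]'

    # Flatten each recorded notification to "type:ctx:id", exactly as in the trace above.
    $RPC notify_get_notifications -i 0 | jq -r '.[] | "\(.type):\(.ctx):\(.id)"'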
00:09:50.570 [2024-06-10 11:33:22.575396] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:50.570 [2024-06-10 11:33:22.575763] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:50.570 00:09:50.570 11:33:22 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:50.570 11:33:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:50.828 Malloc3 00:09:51.130 11:33:22 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:51.130 11:33:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:51.130 [2024-06-10 11:33:23.091092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:51.130 [2024-06-10 11:33:23.091373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.130 [2024-06-10 11:33:23.091535] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:51.131 [2024-06-10 11:33:23.091642] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.131 [2024-06-10 11:33:23.094406] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.131 [2024-06-10 11:33:23.094596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:51.131 PTBdevFromMalloc3 00:09:51.131 11:33:23 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:51.131 11:33:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:51.389 Null0 00:09:51.389 11:33:23 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:51.389 11:33:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:51.647 Malloc0 00:09:51.647 11:33:23 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:51.647 11:33:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:51.906 Malloc1 00:09:51.906 11:33:23 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:51.906 11:33:23 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:52.474 102400+0 records in 00:09:52.474 102400+0 records out 00:09:52.474 104857600 bytes (105 MB, 100 MiB) copied, 0.509437 s, 206 MB/s 00:09:52.474 11:33:24 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:52.474 11:33:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio 
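[editor's note] The bdev setup steps traced above correspond to the RPC calls below (names and sizes copied from the trace; treat this as an illustrative sketch, not the test script itself):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $RPC bdev_split_create Nvme0n1 2                       # yields Nvme0n1p0 / Nvme0n1p1
    $RPC bdev_split_create Malloc0 3                       # yields Malloc0p0..p2
    $RPC bdev_malloc_create 8 4096 --name Malloc3          # 8 MiB, 4 KiB blocks
    $RPC bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
    $RPC bdev_null_create Null0 32 512
    $RPC bdev_malloc_create 32 512 --name Malloc0
    $RPC bdev_malloc_create 16 4096 --name Malloc1

    # AIO bdev backed by a plain file created with dd.
    dd if=/dev/zero of=/sample_aio bs=1024 count=102400
    $RPC bdev_aio_create /sample_aio aio_disk 1024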
aio_disk 1024 00:09:52.733 aio_disk 00:09:52.733 11:33:24 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:52.733 11:33:24 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:52.733 11:33:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:52.990 83c64a18-58c5-432e-9ed3-f552fb533ee5 00:09:52.990 11:33:24 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:52.990 11:33:24 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:52.990 11:33:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:53.557 11:33:25 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:53.557 11:33:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:53.557 11:33:25 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:53.557 11:33:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:53.816 11:33:25 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:53.816 11:33:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8d357f8f-0295-4c57-b053-9f6d2dddd3cf bdev_register:07829297-e23c-499c-add1-2329a8595a9a bdev_register:91240207-e755-47ea-9537-1e3edf8fbbbc bdev_register:03919ab3-d747-4742-938f-767130594967 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@71 -- # sort 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 
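[editor's note] The logical-volume part of the configuration is built with the chain traced above; condensed into a standalone sketch (each call prints the UUID or alias of the new bdev, which the test records as an expected bdev_register event):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # 1 MiB clusters on top of the first NVMe split partition.
    $RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test

    $RPC bdev_lvol_create -l lvs_test lvol0 32         # 32 MiB lvol
    $RPC bdev_lvol_create -l lvs_test -t lvol1 32      # 32 MiB thin-provisioned lvol
    $RPC bdev_lvol_snapshot lvs_test/lvol0 snapshot0   # snapshot of lvol0
    $RPC bdev_lvol_clone lvs_test/snapshot0 clone0     # writable clone of the snapshot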
bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8d357f8f-0295-4c57-b053-9f6d2dddd3cf bdev_register:07829297-e23c-499c-add1-2329a8595a9a bdev_register:91240207-e755-47ea-9537-1e3edf8fbbbc bdev_register:03919ab3-d747-4742-938f-767130594967 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@72 -- # sort 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:54.075 11:33:25 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:54.075 11:33:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:09:54.334 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:8d357f8f-0295-4c57-b053-9f6d2dddd3cf 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:07829297-e23c-499c-add1-2329a8595a9a 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:91240207-e755-47ea-9537-1e3edf8fbbbc 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:03919ab3-d747-4742-938f-767130594967 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:03919ab3-d747-4742-938f-767130594967 bdev_register:07829297-e23c-499c-add1-2329a8595a9a bdev_register:8d357f8f-0295-4c57-b053-9f6d2dddd3cf bdev_register:91240207-e755-47ea-9537-1e3edf8fbbbc bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\3\9\1\9\a\b\3\-\d\7\4\7\-\4\7\4\2\-\9\3\8\f\-\7\6\7\1\3\0\5\9\4\9\6\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\7\8\2\9\2\9\7\-\e\2\3\c\-\4\9\9\c\-\a\d\d\1\-\2\3\2\9\a\8\5\9\5\a\9\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\d\3\5\7\f\8\f\-\0\2\9\5\-\4\c\5\7\-\b\0\5\3\-\9\f\6\d\2\d\d\d\d\3\c\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\1\2\4\0\2\0\7\-\e\7\5\5\-\4\7\e\a\-\9\5\3\7\-\1\e\3\e\d\f\8\f\b\b\b\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@86 -- # cat 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:03919ab3-d747-4742-938f-767130594967 bdev_register:07829297-e23c-499c-add1-2329a8595a9a bdev_register:8d357f8f-0295-4c57-b053-9f6d2dddd3cf bdev_register:91240207-e755-47ea-9537-1e3edf8fbbbc bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:09:54.335 Expected events matched: 00:09:54.335 bdev_register:03919ab3-d747-4742-938f-767130594967 00:09:54.335 bdev_register:07829297-e23c-499c-add1-2329a8595a9a 00:09:54.335 bdev_register:8d357f8f-0295-4c57-b053-9f6d2dddd3cf 00:09:54.335 bdev_register:91240207-e755-47ea-9537-1e3edf8fbbbc 00:09:54.335 bdev_register:Malloc0 00:09:54.335 bdev_register:Malloc0p0 00:09:54.335 bdev_register:Malloc0p1 00:09:54.335 bdev_register:Malloc0p2 00:09:54.335 bdev_register:Malloc1 00:09:54.335 bdev_register:Malloc3 00:09:54.335 bdev_register:Null0 00:09:54.335 bdev_register:Nvme0n1 00:09:54.335 bdev_register:Nvme0n1p0 00:09:54.335 bdev_register:Nvme0n1p1 00:09:54.335 bdev_register:PTBdevFromMalloc3 00:09:54.335 bdev_register:aio_disk 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:09:54.335 11:33:26 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:54.335 11:33:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:09:54.335 11:33:26 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:54.335 11:33:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:09:54.335 11:33:26 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:54.335 11:33:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:54.594 MallocBdevForConfigChangeCheck 00:09:54.594 11:33:26 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:09:54.594 11:33:26 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:54.594 11:33:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.594 11:33:26 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:09:54.594 11:33:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:54.853 11:33:26 
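[editor's note] The comparison itself is plain bash: both lists are sorted and matched as whole strings. A reduced sketch of that check (the function name and the string-equality form are illustrative; the script's own version uses a glob pattern match over the same sorted lists):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Sketch: verify that the recorded notifications match the expected set.
    check_notifications() {
        local expected recorded
        expected=($(printf '%s\n' "$@" | sort))
        recorded=($($RPC notify_get_notifications -i 0 \
                    | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' | sort))
        [[ "${expected[*]}" == "${recorded[*]}" ]]
    }

    check_notifications bdev_register:Nvme0n1 bdev_register:Malloc0   # ...and so on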
json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:09:54.853 INFO: shutting down applications... 00:09:54.853 11:33:26 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:09:54.853 11:33:26 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:09:54.853 11:33:26 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:09:54.853 11:33:26 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:55.119 [2024-06-10 11:33:27.090538] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:55.404 Calling clear_vhost_scsi_subsystem 00:09:55.404 Calling clear_iscsi_subsystem 00:09:55.404 Calling clear_vhost_blk_subsystem 00:09:55.404 Calling clear_nbd_subsystem 00:09:55.404 Calling clear_nvmf_subsystem 00:09:55.404 Calling clear_bdev_subsystem 00:09:55.404 11:33:27 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:55.404 11:33:27 json_config -- json_config/json_config.sh@343 -- # count=100 00:09:55.404 11:33:27 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:09:55.404 11:33:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:55.404 11:33:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:55.404 11:33:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:55.663 11:33:27 json_config -- json_config/json_config.sh@345 -- # break 00:09:55.663 11:33:27 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:09:55.663 11:33:27 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:09:55.663 11:33:27 json_config -- json_config/common.sh@31 -- # local app=target 00:09:55.663 11:33:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:55.663 11:33:27 json_config -- json_config/common.sh@35 -- # [[ -n 112899 ]] 00:09:55.663 11:33:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 112899 00:09:55.663 11:33:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:55.663 11:33:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:55.663 11:33:27 json_config -- json_config/common.sh@41 -- # kill -0 112899 00:09:55.663 11:33:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:56.229 11:33:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:56.229 11:33:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:56.229 11:33:28 json_config -- json_config/common.sh@41 -- # kill -0 112899 00:09:56.229 11:33:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:56.796 11:33:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:56.796 11:33:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:56.796 11:33:28 json_config -- json_config/common.sh@41 -- # kill -0 112899 00:09:56.796 11:33:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:57.362 SPDK target shutdown done 00:09:57.362 INFO: relaunching applications... 
00:09:57.362 11:33:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:57.362 11:33:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:57.362 11:33:29 json_config -- json_config/common.sh@41 -- # kill -0 112899 00:09:57.362 11:33:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:57.362 11:33:29 json_config -- json_config/common.sh@43 -- # break 00:09:57.362 11:33:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:57.362 11:33:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:57.362 11:33:29 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:09:57.362 11:33:29 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:57.362 11:33:29 json_config -- json_config/common.sh@9 -- # local app=target 00:09:57.362 11:33:29 json_config -- json_config/common.sh@10 -- # shift 00:09:57.362 11:33:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:57.362 11:33:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:57.362 11:33:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:57.362 11:33:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:57.362 11:33:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:57.362 11:33:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=113178 00:09:57.362 11:33:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:57.362 11:33:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:57.362 Waiting for target to run... 00:09:57.362 11:33:29 json_config -- json_config/common.sh@25 -- # waitforlisten 113178 /var/tmp/spdk_tgt.sock 00:09:57.362 11:33:29 json_config -- common/autotest_common.sh@830 -- # '[' -z 113178 ']' 00:09:57.362 11:33:29 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:57.362 11:33:29 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:57.362 11:33:29 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:57.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:57.362 11:33:29 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:57.362 11:33:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:57.362 [2024-06-10 11:33:29.317497] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
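[editor's note] Shutdown and relaunch follow a simple poll loop: send SIGINT, wait until the pid disappears, then start a fresh target from the JSON the previous instance saved. A hedged sketch (tgt_pid stands for the pid captured at launch; the 30 x 0.5 s budget mirrors the trace):

    kill -SIGINT "$tgt_pid"
    for _ in $(seq 1 30); do
        kill -0 "$tgt_pid" 2>/dev/null || break    # process gone -> shutdown done
        sleep 0.5
    done
    echo 'SPDK target shutdown done'

    # Relaunch from the configuration the previous instance saved to disk.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/spdk_tgt_config.json" &
    tgt_pid=$!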
00:09:57.362 [2024-06-10 11:33:29.318013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113178 ] 00:09:57.929 [2024-06-10 11:33:29.753390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.929 [2024-06-10 11:33:29.986764] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.864 [2024-06-10 11:33:30.873954] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:58.864 [2024-06-10 11:33:30.874293] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:58.864 [2024-06-10 11:33:30.881947] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:58.864 [2024-06-10 11:33:30.882212] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:58.864 [2024-06-10 11:33:30.889956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:58.864 [2024-06-10 11:33:30.890178] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:58.864 [2024-06-10 11:33:30.890311] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:59.122 [2024-06-10 11:33:30.987535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:59.122 [2024-06-10 11:33:30.987890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.122 [2024-06-10 11:33:30.988123] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:59.122 [2024-06-10 11:33:30.988259] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.122 [2024-06-10 11:33:30.989046] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.122 [2024-06-10 11:33:30.989233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:59.122 00:09:59.122 11:33:31 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:59.122 11:33:31 json_config -- common/autotest_common.sh@863 -- # return 0 00:09:59.122 11:33:31 json_config -- json_config/common.sh@26 -- # echo '' 00:09:59.122 INFO: Checking if target configuration is the same... 00:09:59.122 11:33:31 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:09:59.122 11:33:31 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:59.122 11:33:31 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.122 11:33:31 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:09:59.122 11:33:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:59.122 + '[' 2 -ne 2 ']' 00:09:59.122 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:59.122 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:59.122 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:59.122 +++ basename /dev/fd/62 00:09:59.122 ++ mktemp /tmp/62.XXX 00:09:59.122 + tmp_file_1=/tmp/62.vRb 00:09:59.122 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.122 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:59.122 + tmp_file_2=/tmp/spdk_tgt_config.json.6Ud 00:09:59.122 + ret=0 00:09:59.122 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:59.690 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:59.690 + diff -u /tmp/62.vRb /tmp/spdk_tgt_config.json.6Ud 00:09:59.690 + echo 'INFO: JSON config files are the same' 00:09:59.690 INFO: JSON config files are the same 00:09:59.690 + rm /tmp/62.vRb /tmp/spdk_tgt_config.json.6Ud 00:09:59.690 + exit 0 00:09:59.690 11:33:31 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:09:59.690 11:33:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:59.690 INFO: changing configuration and checking if this can be detected... 00:09:59.690 11:33:31 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:59.690 11:33:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:59.949 11:33:31 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:09:59.949 11:33:31 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.949 11:33:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:59.949 + '[' 2 -ne 2 ']' 00:09:59.949 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:59.949 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:59.949 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:59.949 +++ basename /dev/fd/62 00:09:59.949 ++ mktemp /tmp/62.XXX 00:09:59.949 + tmp_file_1=/tmp/62.CJK 00:09:59.949 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.949 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:59.949 + tmp_file_2=/tmp/spdk_tgt_config.json.n9B 00:09:59.949 + ret=0 00:09:59.949 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:00.209 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:00.209 + diff -u /tmp/62.CJK /tmp/spdk_tgt_config.json.n9B 00:10:00.209 + ret=1 00:10:00.209 + echo '=== Start of file: /tmp/62.CJK ===' 00:10:00.209 + cat /tmp/62.CJK 00:10:00.209 + echo '=== End of file: /tmp/62.CJK ===' 00:10:00.209 + echo '' 00:10:00.209 + echo '=== Start of file: /tmp/spdk_tgt_config.json.n9B ===' 00:10:00.209 + cat /tmp/spdk_tgt_config.json.n9B 00:10:00.209 + echo '=== End of file: /tmp/spdk_tgt_config.json.n9B ===' 00:10:00.209 + echo '' 00:10:00.209 + rm /tmp/62.CJK /tmp/spdk_tgt_config.json.n9B 00:10:00.209 + exit 1 00:10:00.209 INFO: configuration change detected. 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
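[editor's note] Both the "same configuration" and the "configuration change detected" checks dump the live config, canonicalise it with config_filter.py, and diff it against the JSON on disk. A condensed sketch, assuming config_filter.py reads JSON from stdin as the json_diff.sh invocation suggests:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER="$SPDK/test/json_config/config_filter.py"

    live=$(mktemp /tmp/live.XXX)
    disk=$(mktemp /tmp/disk.XXX)
    $RPC save_config | $FILTER -method sort > "$live"
    $FILTER -method sort < "$SPDK/spdk_tgt_config.json" > "$disk"

    if diff -u "$disk" "$live" > /dev/null; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$disk"

The "change" in the second pass is provoked simply by deleting the throwaway MallocBdevForConfigChangeCheck bdev before re-running the diff, so the live config no longer matches the file.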
00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:00.209 11:33:32 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:00.209 11:33:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@317 -- # [[ -n 113178 ]] 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:00.209 11:33:32 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:00.209 11:33:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:00.209 11:33:32 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:00.209 11:33:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:00.480 11:33:32 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:00.480 11:33:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:00.771 11:33:32 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:00.771 11:33:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:01.340 11:33:33 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:01.340 11:33:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:01.340 11:33:33 json_config -- json_config/json_config.sh@193 -- # uname -s 00:10:01.340 11:33:33 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:01.340 11:33:33 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:01.340 11:33:33 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:01.340 11:33:33 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:01.340 11:33:33 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:01.340 11:33:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:01.599 11:33:33 json_config -- json_config/json_config.sh@323 -- # killprocess 113178 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@949 -- # '[' -z 113178 ']' 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@953 -- # kill -0 113178 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@954 -- # uname 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 113178 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:01.599 11:33:33 json_config 
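[editor's note] Teardown mirrors setup in reverse; the cleanup_bdev_subsystem_config trace above amounts to the following calls (sketch only, paths copied from the trace):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $RPC bdev_lvol_delete lvs_test/clone0
    $RPC bdev_lvol_delete lvs_test/lvol0
    $RPC bdev_lvol_delete lvs_test/snapshot0
    $RPC bdev_lvol_delete_lvstore -l lvs_test
    rm -f /sample_aio                       # backing file of the AIO bdev

After that the target process itself is killed (killprocess in the trace) and the saved JSON files are removed.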
-- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 113178' 00:10:01.599 killing process with pid 113178 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@968 -- # kill 113178 00:10:01.599 11:33:33 json_config -- common/autotest_common.sh@973 -- # wait 113178 00:10:02.978 11:33:34 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.978 11:33:34 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:02.978 11:33:34 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:02.978 11:33:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:02.978 INFO: Success 00:10:02.978 11:33:34 json_config -- json_config/json_config.sh@328 -- # return 0 00:10:02.978 11:33:34 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:02.978 00:10:02.978 real 0m15.576s 00:10:02.978 user 0m21.793s 00:10:02.978 sys 0m2.698s 00:10:02.978 ************************************ 00:10:02.978 END TEST json_config 00:10:02.978 ************************************ 00:10:02.978 11:33:34 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:02.978 11:33:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:02.978 11:33:34 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:02.978 11:33:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:02.978 11:33:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:02.978 11:33:34 -- common/autotest_common.sh@10 -- # set +x 00:10:02.978 ************************************ 00:10:02.978 START TEST json_config_extra_key 00:10:02.978 ************************************ 00:10:02.978 11:33:34 json_config_extra_key -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:02.978 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:02.978 11:33:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:02.978 11:33:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.978 11:33:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.978 11:33:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.978 11:33:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.978 11:33:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.978 11:33:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.978 11:33:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a70a76c0-7275-4e7b-9697-8326fc208969 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a70a76c0-7275-4e7b-9697-8326fc208969 00:10:02.979 
11:33:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:02.979 11:33:34 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.979 11:33:34 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.979 11:33:34 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.979 11:33:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:02.979 11:33:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:02.979 11:33:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:02.979 11:33:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:02.979 11:33:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:02.979 11:33:34 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.979 11:33:34 json_config_extra_key -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:02.979 INFO: launching applications... 00:10:02.979 11:33:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=113367 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:02.979 Waiting for target to run... 00:10:02.979 11:33:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 113367 /var/tmp/spdk_tgt.sock 00:10:02.979 11:33:34 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 113367 ']' 00:10:02.979 11:33:34 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:02.979 11:33:34 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:02.979 11:33:34 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:10:02.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:02.979 11:33:34 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:02.979 11:33:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:02.979 [2024-06-10 11:33:34.908566] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:10:02.979 [2024-06-10 11:33:34.908988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113367 ] 00:10:03.560 [2024-06-10 11:33:35.319539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.560 [2024-06-10 11:33:35.583504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.496 00:10:04.496 INFO: shutting down applications... 00:10:04.496 11:33:36 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:04.496 11:33:36 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:04.496 11:33:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:04.496 11:33:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 113367 ]] 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 113367 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113367 00:10:04.496 11:33:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:04.755 11:33:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:04.755 11:33:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:04.755 11:33:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113367 00:10:04.755 11:33:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:05.323 11:33:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:05.323 11:33:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:05.323 11:33:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113367 00:10:05.323 11:33:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:05.939 11:33:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:05.939 11:33:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:05.939 11:33:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113367 00:10:05.939 11:33:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:06.504 11:33:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:06.504 11:33:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:06.504 11:33:38 json_config_extra_key -- 
json_config/common.sh@41 -- # kill -0 113367 00:10:06.504 11:33:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:07.071 11:33:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:07.071 11:33:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.071 11:33:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113367 00:10:07.071 11:33:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:07.329 11:33:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:07.329 11:33:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.329 11:33:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113367 00:10:07.329 11:33:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:07.895 SPDK target shutdown done 00:10:07.895 Success 00:10:07.895 11:33:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:07.895 11:33:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.895 11:33:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113367 00:10:07.895 11:33:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:07.895 11:33:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:07.895 11:33:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:07.895 11:33:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:07.895 11:33:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:07.895 00:10:07.895 real 0m5.099s 00:10:07.895 user 0m4.629s 00:10:07.895 sys 0m0.585s 00:10:07.895 ************************************ 00:10:07.895 END TEST json_config_extra_key 00:10:07.895 ************************************ 00:10:07.895 11:33:39 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:07.895 11:33:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:07.895 11:33:39 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:07.895 11:33:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:07.895 11:33:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:07.895 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:10:07.895 ************************************ 00:10:07.895 START TEST alias_rpc 00:10:07.895 ************************************ 00:10:07.895 11:33:39 alias_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:08.153 * Looking for test storage... 
00:10:08.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:08.153 11:33:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:08.153 11:33:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:08.153 11:33:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=113493 00:10:08.153 11:33:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 113493 00:10:08.153 11:33:39 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 113493 ']' 00:10:08.153 11:33:39 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.153 11:33:39 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:08.153 11:33:39 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.153 11:33:39 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:08.153 11:33:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.153 [2024-06-10 11:33:40.071453] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:10:08.153 [2024-06-10 11:33:40.072227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113493 ] 00:10:08.412 [2024-06-10 11:33:40.231198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.412 [2024-06-10 11:33:40.458469] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.385 11:33:41 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:09.385 11:33:41 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:10:09.385 11:33:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:09.952 11:33:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 113493 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 113493 ']' 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 113493 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 113493 00:10:09.952 killing process with pid 113493 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 113493' 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@968 -- # kill 113493 00:10:09.952 11:33:41 alias_rpc -- common/autotest_common.sh@973 -- # wait 113493 00:10:12.490 ************************************ 00:10:12.490 END TEST alias_rpc 00:10:12.490 ************************************ 00:10:12.490 00:10:12.490 real 0m4.536s 00:10:12.490 user 0m4.635s 00:10:12.490 sys 0m0.526s 00:10:12.490 11:33:44 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:12.490 11:33:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.490 11:33:44 -- 
spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:12.490 11:33:44 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:12.490 11:33:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:12.490 11:33:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:12.490 11:33:44 -- common/autotest_common.sh@10 -- # set +x 00:10:12.490 ************************************ 00:10:12.490 START TEST spdkcli_tcp 00:10:12.490 ************************************ 00:10:12.490 11:33:44 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:12.748 * Looking for test storage... 00:10:12.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:12.748 11:33:44 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:12.748 11:33:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=113604 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:12.748 11:33:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 113604 00:10:12.748 11:33:44 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 113604 ']' 00:10:12.748 11:33:44 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.748 11:33:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:12.748 11:33:44 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.749 11:33:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:12.749 11:33:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.749 [2024-06-10 11:33:44.689932] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:10:12.749 [2024-06-10 11:33:44.690397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113604 ] 00:10:13.006 [2024-06-10 11:33:44.882843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.265 [2024-06-10 11:33:45.134926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.265 [2024-06-10 11:33:45.134923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.200 11:33:46 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:14.200 11:33:46 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:10:14.200 11:33:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=113635 00:10:14.200 11:33:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:14.200 11:33:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:14.459 [ 00:10:14.459 "spdk_get_version", 00:10:14.459 "rpc_get_methods", 00:10:14.459 "keyring_get_keys", 00:10:14.459 "trace_get_info", 00:10:14.459 "trace_get_tpoint_group_mask", 00:10:14.459 "trace_disable_tpoint_group", 00:10:14.459 "trace_enable_tpoint_group", 00:10:14.459 "trace_clear_tpoint_mask", 00:10:14.459 "trace_set_tpoint_mask", 00:10:14.459 "framework_get_pci_devices", 00:10:14.459 "framework_get_config", 00:10:14.459 "framework_get_subsystems", 00:10:14.459 "iobuf_get_stats", 00:10:14.459 "iobuf_set_options", 00:10:14.459 "sock_get_default_impl", 00:10:14.459 "sock_set_default_impl", 00:10:14.459 "sock_impl_set_options", 00:10:14.459 "sock_impl_get_options", 00:10:14.459 "vmd_rescan", 00:10:14.459 "vmd_remove_device", 00:10:14.459 "vmd_enable", 00:10:14.459 "accel_get_stats", 00:10:14.459 "accel_set_options", 00:10:14.459 "accel_set_driver", 00:10:14.459 "accel_crypto_key_destroy", 00:10:14.459 "accel_crypto_keys_get", 00:10:14.459 "accel_crypto_key_create", 00:10:14.459 "accel_assign_opc", 00:10:14.459 "accel_get_module_info", 00:10:14.459 "accel_get_opc_assignments", 00:10:14.459 "notify_get_notifications", 00:10:14.459 "notify_get_types", 00:10:14.459 "bdev_get_histogram", 00:10:14.459 "bdev_enable_histogram", 00:10:14.459 "bdev_set_qos_limit", 00:10:14.459 "bdev_set_qd_sampling_period", 00:10:14.459 "bdev_get_bdevs", 00:10:14.459 "bdev_reset_iostat", 00:10:14.459 "bdev_get_iostat", 00:10:14.459 "bdev_examine", 00:10:14.459 "bdev_wait_for_examine", 00:10:14.459 "bdev_set_options", 00:10:14.459 "scsi_get_devices", 00:10:14.459 "thread_set_cpumask", 00:10:14.459 "framework_get_scheduler", 00:10:14.459 "framework_set_scheduler", 00:10:14.459 "framework_get_reactors", 00:10:14.459 "thread_get_io_channels", 00:10:14.459 "thread_get_pollers", 00:10:14.459 "thread_get_stats", 00:10:14.459 "framework_monitor_context_switch", 00:10:14.459 "spdk_kill_instance", 00:10:14.459 "log_enable_timestamps", 00:10:14.459 "log_get_flags", 00:10:14.459 "log_clear_flag", 00:10:14.459 "log_set_flag", 00:10:14.459 "log_get_level", 00:10:14.459 "log_set_level", 00:10:14.459 "log_get_print_level", 00:10:14.459 "log_set_print_level", 00:10:14.459 "framework_enable_cpumask_locks", 00:10:14.459 "framework_disable_cpumask_locks", 00:10:14.459 "framework_wait_init", 00:10:14.459 "framework_start_init", 00:10:14.459 "virtio_blk_create_transport", 00:10:14.459 "virtio_blk_get_transports", 00:10:14.459 
"vhost_controller_set_coalescing", 00:10:14.459 "vhost_get_controllers", 00:10:14.459 "vhost_delete_controller", 00:10:14.459 "vhost_create_blk_controller", 00:10:14.459 "vhost_scsi_controller_remove_target", 00:10:14.459 "vhost_scsi_controller_add_target", 00:10:14.459 "vhost_start_scsi_controller", 00:10:14.459 "vhost_create_scsi_controller", 00:10:14.459 "nbd_get_disks", 00:10:14.459 "nbd_stop_disk", 00:10:14.459 "nbd_start_disk", 00:10:14.459 "env_dpdk_get_mem_stats", 00:10:14.459 "nvmf_stop_mdns_prr", 00:10:14.459 "nvmf_publish_mdns_prr", 00:10:14.459 "nvmf_subsystem_get_listeners", 00:10:14.459 "nvmf_subsystem_get_qpairs", 00:10:14.459 "nvmf_subsystem_get_controllers", 00:10:14.459 "nvmf_get_stats", 00:10:14.459 "nvmf_get_transports", 00:10:14.459 "nvmf_create_transport", 00:10:14.459 "nvmf_get_targets", 00:10:14.459 "nvmf_delete_target", 00:10:14.459 "nvmf_create_target", 00:10:14.459 "nvmf_subsystem_allow_any_host", 00:10:14.459 "nvmf_subsystem_remove_host", 00:10:14.459 "nvmf_subsystem_add_host", 00:10:14.459 "nvmf_ns_remove_host", 00:10:14.459 "nvmf_ns_add_host", 00:10:14.459 "nvmf_subsystem_remove_ns", 00:10:14.459 "nvmf_subsystem_add_ns", 00:10:14.459 "nvmf_subsystem_listener_set_ana_state", 00:10:14.459 "nvmf_discovery_get_referrals", 00:10:14.459 "nvmf_discovery_remove_referral", 00:10:14.459 "nvmf_discovery_add_referral", 00:10:14.459 "nvmf_subsystem_remove_listener", 00:10:14.459 "nvmf_subsystem_add_listener", 00:10:14.459 "nvmf_delete_subsystem", 00:10:14.459 "nvmf_create_subsystem", 00:10:14.459 "nvmf_get_subsystems", 00:10:14.459 "nvmf_set_crdt", 00:10:14.459 "nvmf_set_config", 00:10:14.459 "nvmf_set_max_subsystems", 00:10:14.459 "iscsi_get_histogram", 00:10:14.459 "iscsi_enable_histogram", 00:10:14.459 "iscsi_set_options", 00:10:14.459 "iscsi_get_auth_groups", 00:10:14.459 "iscsi_auth_group_remove_secret", 00:10:14.459 "iscsi_auth_group_add_secret", 00:10:14.459 "iscsi_delete_auth_group", 00:10:14.459 "iscsi_create_auth_group", 00:10:14.459 "iscsi_set_discovery_auth", 00:10:14.459 "iscsi_get_options", 00:10:14.459 "iscsi_target_node_request_logout", 00:10:14.459 "iscsi_target_node_set_redirect", 00:10:14.459 "iscsi_target_node_set_auth", 00:10:14.459 "iscsi_target_node_add_lun", 00:10:14.459 "iscsi_get_stats", 00:10:14.459 "iscsi_get_connections", 00:10:14.459 "iscsi_portal_group_set_auth", 00:10:14.459 "iscsi_start_portal_group", 00:10:14.459 "iscsi_delete_portal_group", 00:10:14.459 "iscsi_create_portal_group", 00:10:14.459 "iscsi_get_portal_groups", 00:10:14.459 "iscsi_delete_target_node", 00:10:14.459 "iscsi_target_node_remove_pg_ig_maps", 00:10:14.459 "iscsi_target_node_add_pg_ig_maps", 00:10:14.459 "iscsi_create_target_node", 00:10:14.459 "iscsi_get_target_nodes", 00:10:14.459 "iscsi_delete_initiator_group", 00:10:14.459 "iscsi_initiator_group_remove_initiators", 00:10:14.459 "iscsi_initiator_group_add_initiators", 00:10:14.459 "iscsi_create_initiator_group", 00:10:14.459 "iscsi_get_initiator_groups", 00:10:14.459 "keyring_linux_set_options", 00:10:14.459 "keyring_file_remove_key", 00:10:14.459 "keyring_file_add_key", 00:10:14.459 "iaa_scan_accel_module", 00:10:14.459 "dsa_scan_accel_module", 00:10:14.459 "ioat_scan_accel_module", 00:10:14.459 "accel_error_inject_error", 00:10:14.459 "bdev_iscsi_delete", 00:10:14.459 "bdev_iscsi_create", 00:10:14.459 "bdev_iscsi_set_options", 00:10:14.459 "bdev_virtio_attach_controller", 00:10:14.459 "bdev_virtio_scsi_get_devices", 00:10:14.459 "bdev_virtio_detach_controller", 00:10:14.459 "bdev_virtio_blk_set_hotplug", 
00:10:14.459 "bdev_ftl_set_property", 00:10:14.459 "bdev_ftl_get_properties", 00:10:14.459 "bdev_ftl_get_stats", 00:10:14.459 "bdev_ftl_unmap", 00:10:14.459 "bdev_ftl_unload", 00:10:14.459 "bdev_ftl_delete", 00:10:14.459 "bdev_ftl_load", 00:10:14.459 "bdev_ftl_create", 00:10:14.459 "bdev_aio_delete", 00:10:14.459 "bdev_aio_rescan", 00:10:14.459 "bdev_aio_create", 00:10:14.459 "blobfs_create", 00:10:14.459 "blobfs_detect", 00:10:14.459 "blobfs_set_cache_size", 00:10:14.459 "bdev_zone_block_delete", 00:10:14.459 "bdev_zone_block_create", 00:10:14.459 "bdev_delay_delete", 00:10:14.459 "bdev_delay_create", 00:10:14.459 "bdev_delay_update_latency", 00:10:14.459 "bdev_split_delete", 00:10:14.459 "bdev_split_create", 00:10:14.459 "bdev_error_inject_error", 00:10:14.459 "bdev_error_delete", 00:10:14.459 "bdev_error_create", 00:10:14.459 "bdev_raid_set_options", 00:10:14.459 "bdev_raid_remove_base_bdev", 00:10:14.459 "bdev_raid_add_base_bdev", 00:10:14.459 "bdev_raid_delete", 00:10:14.459 "bdev_raid_create", 00:10:14.459 "bdev_raid_get_bdevs", 00:10:14.459 "bdev_lvol_set_parent_bdev", 00:10:14.459 "bdev_lvol_set_parent", 00:10:14.459 "bdev_lvol_check_shallow_copy", 00:10:14.459 "bdev_lvol_start_shallow_copy", 00:10:14.459 "bdev_lvol_grow_lvstore", 00:10:14.459 "bdev_lvol_get_lvols", 00:10:14.459 "bdev_lvol_get_lvstores", 00:10:14.459 "bdev_lvol_delete", 00:10:14.459 "bdev_lvol_set_read_only", 00:10:14.460 "bdev_lvol_resize", 00:10:14.460 "bdev_lvol_decouple_parent", 00:10:14.460 "bdev_lvol_inflate", 00:10:14.460 "bdev_lvol_rename", 00:10:14.460 "bdev_lvol_clone_bdev", 00:10:14.460 "bdev_lvol_clone", 00:10:14.460 "bdev_lvol_snapshot", 00:10:14.460 "bdev_lvol_create", 00:10:14.460 "bdev_lvol_delete_lvstore", 00:10:14.460 "bdev_lvol_rename_lvstore", 00:10:14.460 "bdev_lvol_create_lvstore", 00:10:14.460 "bdev_passthru_delete", 00:10:14.460 "bdev_passthru_create", 00:10:14.460 "bdev_nvme_cuse_unregister", 00:10:14.460 "bdev_nvme_cuse_register", 00:10:14.460 "bdev_opal_new_user", 00:10:14.460 "bdev_opal_set_lock_state", 00:10:14.460 "bdev_opal_delete", 00:10:14.460 "bdev_opal_get_info", 00:10:14.460 "bdev_opal_create", 00:10:14.460 "bdev_nvme_opal_revert", 00:10:14.460 "bdev_nvme_opal_init", 00:10:14.460 "bdev_nvme_send_cmd", 00:10:14.460 "bdev_nvme_get_path_iostat", 00:10:14.460 "bdev_nvme_get_mdns_discovery_info", 00:10:14.460 "bdev_nvme_stop_mdns_discovery", 00:10:14.460 "bdev_nvme_start_mdns_discovery", 00:10:14.460 "bdev_nvme_set_multipath_policy", 00:10:14.460 "bdev_nvme_set_preferred_path", 00:10:14.460 "bdev_nvme_get_io_paths", 00:10:14.460 "bdev_nvme_remove_error_injection", 00:10:14.460 "bdev_nvme_add_error_injection", 00:10:14.460 "bdev_nvme_get_discovery_info", 00:10:14.460 "bdev_nvme_stop_discovery", 00:10:14.460 "bdev_nvme_start_discovery", 00:10:14.460 "bdev_nvme_get_controller_health_info", 00:10:14.460 "bdev_nvme_disable_controller", 00:10:14.460 "bdev_nvme_enable_controller", 00:10:14.460 "bdev_nvme_reset_controller", 00:10:14.460 "bdev_nvme_get_transport_statistics", 00:10:14.460 "bdev_nvme_apply_firmware", 00:10:14.460 "bdev_nvme_detach_controller", 00:10:14.460 "bdev_nvme_get_controllers", 00:10:14.460 "bdev_nvme_attach_controller", 00:10:14.460 "bdev_nvme_set_hotplug", 00:10:14.460 "bdev_nvme_set_options", 00:10:14.460 "bdev_null_resize", 00:10:14.460 "bdev_null_delete", 00:10:14.460 "bdev_null_create", 00:10:14.460 "bdev_malloc_delete", 00:10:14.460 "bdev_malloc_create" 00:10:14.460 ] 00:10:14.460 11:33:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:14.460 11:33:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:14.460 11:33:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 113604 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 113604 ']' 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 113604 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 113604 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:14.460 killing process with pid 113604 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 113604' 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 113604 00:10:14.460 11:33:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 113604 00:10:17.739 ************************************ 00:10:17.739 END TEST spdkcli_tcp 00:10:17.739 ************************************ 00:10:17.739 00:10:17.739 real 0m4.698s 00:10:17.739 user 0m8.411s 00:10:17.739 sys 0m0.616s 00:10:17.739 11:33:49 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:17.739 11:33:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.739 11:33:49 -- spdk/autotest.sh@180 -- # [[ 0 -eq 1 ]] 00:10:17.739 11:33:49 -- spdk/autotest.sh@184 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:17.739 11:33:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:17.739 11:33:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:17.739 11:33:49 -- common/autotest_common.sh@10 -- # set +x 00:10:17.739 ************************************ 00:10:17.739 START TEST dpdk_mem_utility 00:10:17.739 ************************************ 00:10:17.739 11:33:49 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:17.739 * Looking for test storage... 00:10:17.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:17.739 11:33:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:17.739 11:33:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=113739 00:10:17.739 11:33:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 113739 00:10:17.739 11:33:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.739 11:33:49 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 113739 ']' 00:10:17.739 11:33:49 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.739 11:33:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:17.739 11:33:49 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.739 11:33:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:17.739 11:33:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:17.739 [2024-06-10 11:33:49.421183] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:10:17.739 [2024-06-10 11:33:49.421406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113739 ] 00:10:17.739 [2024-06-10 11:33:49.600936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.996 [2024-06-10 11:33:49.827363] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.932 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:18.932 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:10:18.932 11:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:18.932 11:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:18.932 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:18.932 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:18.932 { 00:10:18.932 "filename": "/tmp/spdk_mem_dump.txt" 00:10:18.932 } 00:10:18.932 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:18.932 11:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:18.932 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:18.932 1 heaps totaling size 820.000000 MiB 00:10:18.932 size: 820.000000 MiB heap id: 0 00:10:18.932 end heaps---------- 00:10:18.932 8 mempools totaling size 598.116089 MiB 00:10:18.932 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:18.932 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:18.932 size: 84.521057 MiB name: bdev_io_113739 00:10:18.932 size: 51.011292 MiB name: evtpool_113739 00:10:18.932 size: 50.003479 MiB name: msgpool_113739 00:10:18.932 size: 21.763794 MiB name: PDU_Pool 00:10:18.932 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:18.932 size: 0.026123 MiB name: Session_Pool 00:10:18.932 end mempools------- 00:10:18.932 6 memzones totaling size 4.142822 MiB 00:10:18.932 size: 1.000366 MiB name: RG_ring_0_113739 00:10:18.932 size: 1.000366 MiB name: RG_ring_1_113739 00:10:18.932 size: 1.000366 MiB name: RG_ring_4_113739 00:10:18.932 size: 1.000366 MiB name: RG_ring_5_113739 00:10:18.932 size: 0.125366 MiB name: RG_ring_2_113739 00:10:18.932 size: 0.015991 MiB name: RG_ring_3_113739 00:10:18.932 end memzones------- 00:10:18.932 11:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:18.932 heap id: 0 total size: 820.000000 MiB number of busy elements: 222 number of free elements: 18 00:10:18.932 list of free elements. 
size: 18.470703 MiB 00:10:18.932 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:18.932 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:18.932 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:18.932 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:18.932 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:18.933 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:18.933 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:18.933 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:18.933 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:18.933 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:18.933 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:18.933 element at address: 0x200000200000 with size: 0.834106 MiB 00:10:18.933 element at address: 0x20001b000000 with size: 0.562195 MiB 00:10:18.933 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:18.933 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:18.933 element at address: 0x200013800000 with size: 0.469116 MiB 00:10:18.933 element at address: 0x200028400000 with size: 0.399719 MiB 00:10:18.933 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:18.933 list of standard malloc elements. size: 199.264893 MiB 00:10:18.933 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:18.933 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:18.933 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:18.933 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:18.933 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:18.933 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:18.933 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:18.933 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:18.933 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:18.933 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:18.933 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:18.933 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6a80 with size: 0.000244 MiB 
00:10:18.933 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:18.933 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:18.933 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:18.933 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0923c0 
with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:10:18.934 element at address: 0x200028466540 with size: 0.000244 MiB 
00:10:18.934 element at address: 0x200028466640 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846d300 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:18.934 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:18.934 list of memzone associated elements. 
size: 602.264404 MiB 00:10:18.934 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:18.934 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:18.934 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:18.935 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:18.935 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:18.935 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_113739_0 00:10:18.935 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:18.935 associated memzone info: size: 48.002930 MiB name: MP_evtpool_113739_0 00:10:18.935 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:18.935 associated memzone info: size: 48.002930 MiB name: MP_msgpool_113739_0 00:10:18.935 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:18.935 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:18.935 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:18.935 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:18.935 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:18.935 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_113739 00:10:18.935 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:18.935 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_113739 00:10:18.935 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:18.935 associated memzone info: size: 1.007996 MiB name: MP_evtpool_113739 00:10:18.935 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:18.935 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:18.935 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:18.935 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:18.935 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:18.935 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:18.935 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:18.935 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:18.935 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:18.935 associated memzone info: size: 1.000366 MiB name: RG_ring_0_113739 00:10:18.935 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:18.935 associated memzone info: size: 1.000366 MiB name: RG_ring_1_113739 00:10:18.935 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:18.935 associated memzone info: size: 1.000366 MiB name: RG_ring_4_113739 00:10:18.935 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:18.935 associated memzone info: size: 1.000366 MiB name: RG_ring_5_113739 00:10:18.935 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:18.935 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_113739 00:10:18.935 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:18.935 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:18.935 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:18.935 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:18.935 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:18.935 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:18.935 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:18.935 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_113739 00:10:18.935 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:18.935 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:18.935 element at address: 0x200028466740 with size: 0.023804 MiB 00:10:18.935 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:18.935 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:18.935 associated memzone info: size: 0.015991 MiB name: RG_ring_3_113739 00:10:18.935 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:10:18.935 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:18.935 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:18.935 associated memzone info: size: 0.000183 MiB name: MP_msgpool_113739 00:10:18.935 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:18.935 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_113739 00:10:18.935 element at address: 0x20002846d400 with size: 0.000366 MiB 00:10:18.935 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:18.935 11:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:18.935 11:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 113739 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 113739 ']' 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 113739 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 113739 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:18.935 killing process with pid 113739 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 113739' 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 113739 00:10:18.935 11:33:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 113739 00:10:22.217 ************************************ 00:10:22.217 END TEST dpdk_mem_utility 00:10:22.217 ************************************ 00:10:22.217 00:10:22.217 real 0m4.496s 00:10:22.217 user 0m4.521s 00:10:22.217 sys 0m0.609s 00:10:22.217 11:33:53 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:22.217 11:33:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:22.217 11:33:53 -- spdk/autotest.sh@185 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:22.217 11:33:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:22.217 11:33:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:22.217 11:33:53 -- common/autotest_common.sh@10 -- # set +x 00:10:22.217 ************************************ 00:10:22.217 START TEST event 00:10:22.217 ************************************ 00:10:22.217 11:33:53 event -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:22.217 * Looking for test storage... 
00:10:22.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:22.217 11:33:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:22.217 11:33:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:22.217 11:33:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:22.217 11:33:53 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:10:22.217 11:33:53 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:22.217 11:33:53 event -- common/autotest_common.sh@10 -- # set +x 00:10:22.217 ************************************ 00:10:22.217 START TEST event_perf 00:10:22.217 ************************************ 00:10:22.217 11:33:53 event.event_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:22.217 Running I/O for 1 seconds...[2024-06-10 11:33:53.946448] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:10:22.217 [2024-06-10 11:33:53.946822] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113853 ] 00:10:22.217 [2024-06-10 11:33:54.175998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.474 [2024-06-10 11:33:54.395623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.474 [2024-06-10 11:33:54.395694] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.474 [2024-06-10 11:33:54.395830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.474 [2024-06-10 11:33:54.396023] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.849 Running I/O for 1 seconds... 00:10:23.849 lcore 0: 167114 00:10:23.849 lcore 1: 167114 00:10:23.849 lcore 2: 167114 00:10:23.849 lcore 3: 167112 00:10:23.849 done. 00:10:23.849 ************************************ 00:10:23.849 END TEST event_perf 00:10:23.849 ************************************ 00:10:23.849 00:10:23.849 real 0m1.939s 00:10:23.849 user 0m4.673s 00:10:23.849 sys 0m0.160s 00:10:23.849 11:33:55 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:23.849 11:33:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:23.849 11:33:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:23.849 11:33:55 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:23.849 11:33:55 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:23.849 11:33:55 event -- common/autotest_common.sh@10 -- # set +x 00:10:23.849 ************************************ 00:10:23.849 START TEST event_reactor 00:10:23.849 ************************************ 00:10:23.849 11:33:55 event.event_reactor -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:24.141 [2024-06-10 11:33:55.929185] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:10:24.141 [2024-06-10 11:33:55.929718] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113906 ] 00:10:24.141 [2024-06-10 11:33:56.157989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.449 [2024-06-10 11:33:56.445668] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.350 test_start 00:10:26.350 oneshot 00:10:26.350 tick 100 00:10:26.350 tick 100 00:10:26.350 tick 250 00:10:26.350 tick 100 00:10:26.350 tick 100 00:10:26.350 tick 100 00:10:26.350 tick 250 00:10:26.350 tick 500 00:10:26.350 tick 100 00:10:26.350 tick 100 00:10:26.350 tick 250 00:10:26.350 tick 100 00:10:26.350 tick 100 00:10:26.350 test_end 00:10:26.350 ************************************ 00:10:26.350 END TEST event_reactor 00:10:26.350 ************************************ 00:10:26.350 00:10:26.350 real 0m2.067s 00:10:26.350 user 0m1.813s 00:10:26.350 sys 0m0.148s 00:10:26.350 11:33:57 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:26.350 11:33:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:26.350 11:33:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:26.350 11:33:57 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:26.350 11:33:57 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:26.350 11:33:57 event -- common/autotest_common.sh@10 -- # set +x 00:10:26.350 ************************************ 00:10:26.350 START TEST event_reactor_perf 00:10:26.350 ************************************ 00:10:26.350 11:33:57 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:26.350 [2024-06-10 11:33:58.030921] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:10:26.350 [2024-06-10 11:33:58.031498] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113959 ] 00:10:26.350 [2024-06-10 11:33:58.205284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.613 [2024-06-10 11:33:58.468545] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.016 test_start 00:10:28.016 test_end 00:10:28.016 Performance: 317625 events per second 00:10:28.016 ************************************ 00:10:28.016 END TEST event_reactor_perf 00:10:28.016 ************************************ 00:10:28.016 00:10:28.016 real 0m1.971s 00:10:28.016 user 0m1.740s 00:10:28.016 sys 0m0.128s 00:10:28.016 11:33:59 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:28.016 11:33:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:28.016 11:33:59 event -- event/event.sh@49 -- # uname -s 00:10:28.016 11:33:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:28.016 11:33:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:28.016 11:33:59 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:28.016 11:33:59 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:28.016 11:33:59 event -- common/autotest_common.sh@10 -- # set +x 00:10:28.016 ************************************ 00:10:28.016 START TEST event_scheduler 00:10:28.016 ************************************ 00:10:28.016 11:33:59 event.event_scheduler -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:28.016 * Looking for test storage... 00:10:28.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:28.274 11:34:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:28.274 11:34:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=114037 00:10:28.274 11:34:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:28.274 11:34:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:28.274 11:34:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 114037 00:10:28.274 11:34:00 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 114037 ']' 00:10:28.274 11:34:00 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.274 11:34:00 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:28.274 11:34:00 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.274 11:34:00 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:28.274 11:34:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:28.274 [2024-06-10 11:34:00.165949] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:10:28.274 [2024-06-10 11:34:00.166611] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114037 ] 00:10:28.532 [2024-06-10 11:34:00.378267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.789 [2024-06-10 11:34:00.677034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.789 [2024-06-10 11:34:00.677156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.789 [2024-06-10 11:34:00.677088] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.789 [2024-06-10 11:34:00.677151] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.359 11:34:01 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:29.359 11:34:01 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:10:29.359 11:34:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:29.359 11:34:01 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.359 11:34:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:29.359 POWER: Env isn't set yet! 00:10:29.359 POWER: Attempting to initialise ACPI cpufreq power management... 00:10:29.359 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:29.359 POWER: Cannot set governor of lcore 0 to userspace 00:10:29.359 POWER: Attempting to initialise PSTAT power management... 00:10:29.359 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:29.359 POWER: Cannot set governor of lcore 0 to performance 00:10:29.359 POWER: Attempting to initialise AMD PSTATE power management... 00:10:29.359 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:29.359 POWER: Cannot set governor of lcore 0 to userspace 00:10:29.359 POWER: Attempting to initialise CPPC power management... 00:10:29.359 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:29.359 POWER: Cannot set governor of lcore 0 to userspace 00:10:29.359 POWER: Attempting to initialise VM power management... 
00:10:29.359 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:29.359 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:29.359 POWER: Unable to set Power Management Environment for lcore 0 00:10:29.359 [2024-06-10 11:34:01.176826] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:10:29.359 [2024-06-10 11:34:01.176950] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:10:29.359 [2024-06-10 11:34:01.177074] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:10:29.359 [2024-06-10 11:34:01.177172] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:29.359 [2024-06-10 11:34:01.177236] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:29.359 [2024-06-10 11:34:01.177342] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:29.359 11:34:01 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.359 11:34:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:29.359 11:34:01 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.359 11:34:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 [2024-06-10 11:34:01.496738] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:29.618 11:34:01 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:29.618 11:34:01 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:29.618 11:34:01 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 ************************************ 00:10:29.618 START TEST scheduler_create_thread 00:10:29.618 ************************************ 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 2 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 3 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 4 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 5 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 6 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 7 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 8 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 9 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 10 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.618 11:34:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.574 11:34:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:30.574 11:34:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:30.574 11:34:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:30.574 11:34:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:30.574 11:34:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:31.952 ************************************ 00:10:31.952 END TEST scheduler_create_thread 00:10:31.952 ************************************ 00:10:31.952 11:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:31.952 00:10:31.952 real 0m2.157s 00:10:31.952 user 0m0.022s 00:10:31.952 sys 0m0.000s 00:10:31.952 11:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:31.952 11:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:31.952 11:34:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:31.952 11:34:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 114037 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 114037 ']' 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 114037 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 00:10:31.952 11:34:03 
event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 114037 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 114037' 00:10:31.952 killing process with pid 114037 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 114037 00:10:31.952 11:34:03 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 114037 00:10:32.211 [2024-06-10 11:34:04.147364] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:33.584 ************************************ 00:10:33.584 END TEST event_scheduler 00:10:33.584 ************************************ 00:10:33.584 00:10:33.584 real 0m5.570s 00:10:33.584 user 0m9.072s 00:10:33.584 sys 0m0.499s 00:10:33.584 11:34:05 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:33.584 11:34:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:33.584 11:34:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:33.584 11:34:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:33.584 11:34:05 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:33.584 11:34:05 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:33.584 11:34:05 event -- common/autotest_common.sh@10 -- # set +x 00:10:33.584 ************************************ 00:10:33.584 START TEST app_repeat 00:10:33.584 ************************************ 00:10:33.584 11:34:05 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=114167 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 114167' 00:10:33.584 Process app_repeat pid: 114167 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:33.584 spdk_app_start Round 0 00:10:33.584 11:34:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114167 /var/tmp/spdk-nbd.sock 00:10:33.584 11:34:05 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 114167 ']' 00:10:33.584 11:34:05 event.app_repeat -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:33.584 11:34:05 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:33.584 11:34:05 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:33.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:33.584 11:34:05 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:33.584 11:34:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:33.842 [2024-06-10 11:34:05.697699] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:10:33.842 [2024-06-10 11:34:05.698339] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114167 ] 00:10:33.842 [2024-06-10 11:34:05.888175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:34.411 [2024-06-10 11:34:06.164814] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.411 [2024-06-10 11:34:06.164815] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.980 11:34:06 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:34.980 11:34:06 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:10:34.980 11:34:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:35.239 Malloc0 00:10:35.239 11:34:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:35.498 Malloc1 00:10:35.498 11:34:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:35.498 11:34:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc0 /dev/nbd0 00:10:35.758 /dev/nbd0 00:10:35.758 11:34:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:35.758 11:34:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:35.758 1+0 records in 00:10:35.758 1+0 records out 00:10:35.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376951 s, 10.9 MB/s 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:35.758 11:34:07 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:10:35.758 11:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.758 11:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:35.758 11:34:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:36.017 /dev/nbd1 00:10:36.017 11:34:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:36.017 11:34:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:36.017 1+0 records in 00:10:36.017 1+0 records out 00:10:36.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057289 s, 7.1 MB/s 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:36.017 11:34:08 event.app_repeat -- 
common/autotest_common.sh@885 -- # size=4096 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:36.017 11:34:08 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:10:36.017 11:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.017 11:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:36.017 11:34:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:36.017 11:34:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.017 11:34:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:36.275 { 00:10:36.275 "nbd_device": "/dev/nbd0", 00:10:36.275 "bdev_name": "Malloc0" 00:10:36.275 }, 00:10:36.275 { 00:10:36.275 "nbd_device": "/dev/nbd1", 00:10:36.275 "bdev_name": "Malloc1" 00:10:36.275 } 00:10:36.275 ]' 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:36.275 { 00:10:36.275 "nbd_device": "/dev/nbd0", 00:10:36.275 "bdev_name": "Malloc0" 00:10:36.275 }, 00:10:36.275 { 00:10:36.275 "nbd_device": "/dev/nbd1", 00:10:36.275 "bdev_name": "Malloc1" 00:10:36.275 } 00:10:36.275 ]' 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:36.275 /dev/nbd1' 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:36.275 /dev/nbd1' 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:36.275 256+0 records in 00:10:36.275 256+0 records out 00:10:36.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128759 s, 81.4 MB/s 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.275 11:34:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:36.534 256+0 records in 00:10:36.534 256+0 records out 00:10:36.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270848 s, 38.7 
MB/s 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:36.534 256+0 records in 00:10:36.534 256+0 records out 00:10:36.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271901 s, 38.6 MB/s 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.534 11:34:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.793 11:34:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:37.071 11:34:08 
event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.071 11:34:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:37.355 11:34:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:37.355 11:34:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:37.922 11:34:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:39.296 [2024-06-10 11:34:11.219551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:39.554 [2024-06-10 11:34:11.416533] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.554 [2024-06-10 11:34:11.416537] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.812 [2024-06-10 11:34:11.612246] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:39.812 [2024-06-10 11:34:11.612334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:40.795 spdk_app_start Round 1 00:10:40.795 11:34:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:40.795 11:34:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:40.795 11:34:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114167 /var/tmp/spdk-nbd.sock 00:10:40.795 11:34:12 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 114167 ']' 00:10:40.795 11:34:12 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:40.795 11:34:12 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:40.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:10:40.795 11:34:12 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:40.795 11:34:12 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:40.795 11:34:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:41.054 11:34:12 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:41.054 11:34:12 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:10:41.054 11:34:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:41.313 Malloc0 00:10:41.313 11:34:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:41.879 Malloc1 00:10:41.879 11:34:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.879 11:34:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:42.137 /dev/nbd0 00:10:42.137 11:34:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:42.137 11:34:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:42.137 11:34:14 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:10:42.137 11:34:14 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:10:42.137 11:34:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:42.137 11:34:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:42.137 11:34:14 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:10:42.137 11:34:14 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:10:42.137 11:34:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:42.137 11:34:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:42.138 11:34:14 event.app_repeat -- 
common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:42.138 1+0 records in 00:10:42.138 1+0 records out 00:10:42.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231212 s, 17.7 MB/s 00:10:42.138 11:34:14 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:42.138 11:34:14 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:10:42.138 11:34:14 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:42.138 11:34:14 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:42.138 11:34:14 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:10:42.138 11:34:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:42.138 11:34:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:42.138 11:34:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:42.399 /dev/nbd1 00:10:42.399 11:34:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:42.399 11:34:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:42.399 1+0 records in 00:10:42.399 1+0 records out 00:10:42.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310887 s, 13.2 MB/s 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:42.399 11:34:14 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:10:42.399 11:34:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:42.399 11:34:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:42.399 11:34:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:42.399 11:34:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.399 11:34:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:42.660 { 00:10:42.660 
"nbd_device": "/dev/nbd0", 00:10:42.660 "bdev_name": "Malloc0" 00:10:42.660 }, 00:10:42.660 { 00:10:42.660 "nbd_device": "/dev/nbd1", 00:10:42.660 "bdev_name": "Malloc1" 00:10:42.660 } 00:10:42.660 ]' 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:42.660 { 00:10:42.660 "nbd_device": "/dev/nbd0", 00:10:42.660 "bdev_name": "Malloc0" 00:10:42.660 }, 00:10:42.660 { 00:10:42.660 "nbd_device": "/dev/nbd1", 00:10:42.660 "bdev_name": "Malloc1" 00:10:42.660 } 00:10:42.660 ]' 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:42.660 /dev/nbd1' 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:42.660 /dev/nbd1' 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:42.660 256+0 records in 00:10:42.660 256+0 records out 00:10:42.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00696176 s, 151 MB/s 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:42.660 11:34:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:42.918 256+0 records in 00:10:42.918 256+0 records out 00:10:42.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025489 s, 41.1 MB/s 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:42.918 256+0 records in 00:10:42.918 256+0 records out 00:10:42.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349843 s, 30.0 MB/s 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@74 
-- # '[' verify = write ']' 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.918 11:34:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:43.176 11:34:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.176 11:34:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.434 11:34:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:43.693 11:34:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:43.693 11:34:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:44.265 11:34:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:46.205 [2024-06-10 11:34:17.792419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:46.205 [2024-06-10 11:34:18.004188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.205 [2024-06-10 11:34:18.004190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.205 [2024-06-10 11:34:18.213183] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:46.205 [2024-06-10 11:34:18.213293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:47.136 spdk_app_start Round 2 00:10:47.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:47.136 11:34:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:47.136 11:34:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:47.136 11:34:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114167 /var/tmp/spdk-nbd.sock 00:10:47.136 11:34:19 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 114167 ']' 00:10:47.136 11:34:19 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:47.136 11:34:19 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:47.136 11:34:19 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:47.136 11:34:19 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:47.136 11:34:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:47.393 11:34:19 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:47.393 11:34:19 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:10:47.393 11:34:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:47.957 Malloc0 00:10:47.957 11:34:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:48.214 Malloc1 00:10:48.214 11:34:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.214 11:34:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:48.780 /dev/nbd0 00:10:48.780 11:34:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:48.780 11:34:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:48.780 1+0 records in 00:10:48.780 1+0 records out 
00:10:48.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444563 s, 9.2 MB/s 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:48.780 11:34:20 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:10:48.780 11:34:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.780 11:34:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.780 11:34:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:49.037 /dev/nbd1 00:10:49.037 11:34:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:49.037 11:34:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:49.037 1+0 records in 00:10:49.037 1+0 records out 00:10:49.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600697 s, 6.8 MB/s 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:49.037 11:34:20 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:10:49.037 11:34:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.037 11:34:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.037 11:34:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:49.037 11:34:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.038 11:34:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:49.295 { 00:10:49.295 "nbd_device": "/dev/nbd0", 00:10:49.295 "bdev_name": "Malloc0" 00:10:49.295 }, 00:10:49.295 { 00:10:49.295 "nbd_device": "/dev/nbd1", 00:10:49.295 "bdev_name": "Malloc1" 00:10:49.295 } 
00:10:49.295 ]' 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:49.295 { 00:10:49.295 "nbd_device": "/dev/nbd0", 00:10:49.295 "bdev_name": "Malloc0" 00:10:49.295 }, 00:10:49.295 { 00:10:49.295 "nbd_device": "/dev/nbd1", 00:10:49.295 "bdev_name": "Malloc1" 00:10:49.295 } 00:10:49.295 ]' 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:49.295 /dev/nbd1' 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:49.295 /dev/nbd1' 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:49.295 256+0 records in 00:10:49.295 256+0 records out 00:10:49.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658916 s, 159 MB/s 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:49.295 11:34:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:49.563 256+0 records in 00:10:49.563 256+0 records out 00:10:49.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298596 s, 35.1 MB/s 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:49.563 256+0 records in 00:10:49.563 256+0 records out 00:10:49.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0355015 s, 29.5 MB/s 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:49.563 11:34:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.563 11:34:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.820 11:34:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.078 11:34:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:50.335 11:34:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:50.335 11:34:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:50.591 11:34:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:52.490 [2024-06-10 11:34:24.078956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:52.490 [2024-06-10 11:34:24.283657] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.490 [2024-06-10 11:34:24.283657] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.490 [2024-06-10 11:34:24.495567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:52.490 [2024-06-10 11:34:24.495693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:53.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:53.863 11:34:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 114167 /var/tmp/spdk-nbd.sock 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 114167 ']' 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
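Stripped of the xtrace noise, the NBD round that just completed is a simple write-then-verify cycle: stage 1 MiB of random data, push it to every exported device with O_DIRECT, read it back with cmp, then detach the devices and re-count them. A condensed sketch follows; the real nbd_dd_data_verify helper runs the write and verify passes as two separate calls, which are collapsed into one function here for brevity, and /tmp/nbdrandtest stands in for the test/event/nbdrandtest path used in the trace.

# Condensed sketch of the write/verify cycle traced above; approximate, not the
# verbatim bdev/nbd_common.sh helper (write and verify are merged here).
nbd_write_verify() {
    local tmp_file=/tmp/nbdrandtest     # trace uses test/event/nbdrandtest
    local dev

    # Stage 1 MiB (256 x 4 KiB) of random data once.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # Write it to every exported NBD device, bypassing the page cache.
    for dev in "$@"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Read each device back and byte-compare the first 1 MiB against the source.
    for dev in "$@"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done

    rm "$tmp_file"
}

# Usage matching the traced devices:  nbd_write_verify /dev/nbd0 /dev/nbd1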
00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:10:53.863 11:34:25 event.app_repeat -- event/event.sh@39 -- # killprocess 114167 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 114167 ']' 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 114167 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:53.863 11:34:25 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 114167 00:10:54.121 11:34:25 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:54.121 11:34:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:54.121 11:34:25 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 114167' 00:10:54.121 killing process with pid 114167 00:10:54.121 11:34:25 event.app_repeat -- common/autotest_common.sh@968 -- # kill 114167 00:10:54.121 11:34:25 event.app_repeat -- common/autotest_common.sh@973 -- # wait 114167 00:10:55.494 spdk_app_start is called in Round 0. 00:10:55.494 Shutdown signal received, stop current app iteration 00:10:55.495 Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 reinitialization... 00:10:55.495 spdk_app_start is called in Round 1. 00:10:55.495 Shutdown signal received, stop current app iteration 00:10:55.495 Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 reinitialization... 00:10:55.495 spdk_app_start is called in Round 2. 00:10:55.495 Shutdown signal received, stop current app iteration 00:10:55.495 Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 reinitialization... 00:10:55.495 spdk_app_start is called in Round 3. 00:10:55.495 Shutdown signal received, stop current app iteration 00:10:55.495 ************************************ 00:10:55.495 END TEST app_repeat 00:10:55.495 ************************************ 00:10:55.495 11:34:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:55.495 11:34:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:55.495 00:10:55.495 real 0m21.686s 00:10:55.495 user 0m46.102s 00:10:55.495 sys 0m3.356s 00:10:55.495 11:34:27 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:55.495 11:34:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:55.495 11:34:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:55.495 11:34:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:55.495 11:34:27 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:55.495 11:34:27 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:55.495 11:34:27 event -- common/autotest_common.sh@10 -- # set +x 00:10:55.495 ************************************ 00:10:55.495 START TEST cpu_locks 00:10:55.495 ************************************ 00:10:55.495 11:34:27 event.cpu_locks -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:55.495 * Looking for test storage... 
00:10:55.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:55.495 11:34:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:55.495 11:34:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:55.495 11:34:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:55.495 11:34:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:55.495 11:34:27 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:55.495 11:34:27 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:55.495 11:34:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:55.495 ************************************ 00:10:55.495 START TEST default_locks 00:10:55.495 ************************************ 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=114709 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 114709 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 114709 ']' 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:55.495 11:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:55.753 [2024-06-10 11:34:27.594224] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:10:55.753 [2024-06-10 11:34:27.594463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114709 ] 00:10:55.753 [2024-06-10 11:34:27.786617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.011 [2024-06-10 11:34:27.993637] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.947 11:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:56.947 11:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:10:56.947 11:34:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 114709 00:10:56.947 11:34:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 114709 00:10:56.947 11:34:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:57.208 11:34:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 114709 00:10:57.208 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 114709 ']' 00:10:57.208 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 114709 00:10:57.208 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:10:57.208 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:57.209 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 114709 00:10:57.209 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:57.209 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:57.209 killing process with pid 114709 00:10:57.209 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 114709' 00:10:57.209 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 114709 00:10:57.209 11:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 114709 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 114709 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 114709 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 114709 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 114709 ']' 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:00.498 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.498 ERROR: process (pid: 114709) is no longer running 00:11:00.498 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (114709) - No such process 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:00.498 00:11:00.498 real 0m4.487s 00:11:00.498 user 0m4.555s 00:11:00.498 sys 0m0.705s 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:00.498 11:34:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.498 ************************************ 00:11:00.498 END TEST default_locks 00:11:00.498 ************************************ 00:11:00.498 11:34:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:00.498 11:34:32 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:00.498 11:34:32 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:00.498 11:34:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.498 ************************************ 00:11:00.498 START TEST default_locks_via_rpc 00:11:00.498 ************************************ 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=114801 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 114801 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 114801 ']' 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:00.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
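From here on, every cpu_locks sub-test leans on the same locks_exist probe (cpu_locks.sh@22 in the trace): the target is considered locked onto its cores when lslocks shows the process holding a file lock whose path contains spdk_cpu_lock. A minimal sketch, assuming only what the traced lslocks/grep pipeline shows:

# Sketch of the locks_exist check traced at cpu_locks.sh@22; the pid argument
# and the spdk_cpu_lock grep pattern come straight from the trace.
locks_exist() {
    local pid=$1
    # Succeeds only if the process holds a lock on a *spdk_cpu_lock* file.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# e.g. locks_exist 114709   # true while 'spdk_tgt -m 0x1' holds core 0's lock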
00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:00.498 11:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.498 [2024-06-10 11:34:32.136805] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:11:00.498 [2024-06-10 11:34:32.137581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114801 ] 00:11:00.498 [2024-06-10 11:34:32.320684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.498 [2024-06-10 11:34:32.537544] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 114801 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:01.433 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 114801 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 114801 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 114801 ']' 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 114801 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@955 -- # ps --no-headers -o comm= 114801 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:02.000 killing process with pid 114801 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 114801' 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 114801 00:11:02.000 11:34:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 114801 00:11:04.573 00:11:04.573 real 0m4.257s 00:11:04.573 user 0m4.281s 00:11:04.573 sys 0m0.741s 00:11:04.573 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:04.573 11:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.573 ************************************ 00:11:04.573 END TEST default_locks_via_rpc 00:11:04.573 ************************************ 00:11:04.573 11:34:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:04.573 11:34:36 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:04.573 11:34:36 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:04.573 11:34:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:04.573 ************************************ 00:11:04.573 START TEST non_locking_app_on_locked_coremask 00:11:04.573 ************************************ 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=114885 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 114885 /var/tmp/spdk.sock 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 114885 ']' 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:04.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:04.574 11:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:04.574 [2024-06-10 11:34:36.452202] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:11:04.574 [2024-06-10 11:34:36.453033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114885 ] 00:11:04.832 [2024-06-10 11:34:36.632569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.832 [2024-06-10 11:34:36.858551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=114909 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 114909 /var/tmp/spdk2.sock 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 114909 ']' 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:05.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:05.766 11:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:05.766 [2024-06-10 11:34:37.736233] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:11:05.766 [2024-06-10 11:34:37.736423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114909 ] 00:11:06.024 [2024-06-10 11:34:37.891535] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
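The "CPU core locks deactivated" notice above comes from the second target in this test, which is allowed to reuse the already-claimed core precisely because it opts out of locking. Reduced to the two launch commands (binary path, mask, flags and socket as traced; waits and cleanup elided), the non_locking_app_on_locked_coremask setup is roughly:

# First instance claims core 0 and takes its spdk_cpu_lock (default behaviour).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

# Second instance reuses the same core mask but opts out of lock acquisition,
# and listens on a separate RPC socket so the two targets do not collide.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &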
00:11:06.024 [2024-06-10 11:34:37.891617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.283 [2024-06-10 11:34:38.311942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.839 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:08.839 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:08.839 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 114885 00:11:08.839 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114885 00:11:08.839 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 114885 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 114885 ']' 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 114885 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 114885 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:09.098 killing process with pid 114885 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 114885' 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 114885 00:11:09.098 11:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 114885 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 114909 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 114909 ']' 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 114909 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 114909 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 114909' 00:11:14.393 killing process with pid 114909 00:11:14.393 
11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 114909 00:11:14.393 11:34:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 114909 00:11:16.994 00:11:16.994 real 0m12.240s 00:11:16.994 user 0m12.633s 00:11:16.994 sys 0m1.415s 00:11:16.994 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:16.994 11:34:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:16.994 ************************************ 00:11:16.994 END TEST non_locking_app_on_locked_coremask 00:11:16.994 ************************************ 00:11:16.994 11:34:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:16.994 11:34:48 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:16.994 11:34:48 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:16.994 11:34:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:16.994 ************************************ 00:11:16.994 START TEST locking_app_on_unlocked_coremask 00:11:16.994 ************************************ 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=115078 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 115078 /var/tmp/spdk.sock 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 115078 ']' 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:16.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:16.994 11:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:16.994 [2024-06-10 11:34:48.723464] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:11:16.994 [2024-06-10 11:34:48.723652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115078 ] 00:11:16.994 [2024-06-10 11:34:48.881849] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
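The killprocess calls that close each of these sub-tests (autotest_common.sh@949-973 in the trace) follow one pattern: make sure the pid is still alive, check that its command name is not sudo before signalling it, then kill and reap it. An approximate reconstruction; the sudo branch and error handling of the real helper are simplified here:

# Approximate reconstruction of killprocess from the traced
# autotest_common.sh@949-973 lines; simplified, not the verbatim helper.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do

    if [ "$(uname)" = Linux ]; then
        # The trace checks the command name (e.g. reactor_0) against 'sudo'
        # so a privileged wrapper is never signalled by mistake.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # real helper treats this case specially
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                      # reap so the test observes the exit
}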
00:11:16.994 [2024-06-10 11:34:48.881933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.253 [2024-06-10 11:34:49.103478] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=115099 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 115099 /var/tmp/spdk2.sock 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 115099 ']' 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:18.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:18.185 11:34:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:18.185 [2024-06-10 11:34:50.090449] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:11:18.185 [2024-06-10 11:34:50.090712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115099 ] 00:11:18.443 [2024-06-10 11:34:50.256574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.701 [2024-06-10 11:34:50.698109] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.275 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:21.275 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:21.275 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 115099 00:11:21.275 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 115099 00:11:21.275 11:34:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 115078 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 115078 ']' 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 115078 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 115078 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:21.532 killing process with pid 115078 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 115078' 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 115078 00:11:21.532 11:34:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 115078 00:11:26.793 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 115099 00:11:26.793 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 115099 ']' 00:11:26.793 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 115099 00:11:27.052 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:27.052 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:27.052 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 115099 00:11:27.052 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:27.052 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:27.052 killing process with pid 115099 00:11:27.052 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 115099' 00:11:27.052 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 115099 00:11:27.052 11:34:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 115099 00:11:29.581 00:11:29.581 real 0m12.949s 00:11:29.581 user 0m13.427s 00:11:29.581 sys 0m1.453s 00:11:29.581 11:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:29.581 11:35:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:29.581 ************************************ 00:11:29.581 END TEST locking_app_on_unlocked_coremask 00:11:29.581 ************************************ 00:11:29.581 11:35:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:29.581 11:35:01 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:29.581 11:35:01 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:29.581 11:35:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.840 ************************************ 00:11:29.840 START TEST locking_app_on_locked_coremask 00:11:29.840 ************************************ 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=115281 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 115281 /var/tmp/spdk.sock 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 115281 ']' 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:29.840 11:35:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:29.840 [2024-06-10 11:35:01.756359] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:11:29.840 [2024-06-10 11:35:01.756684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115281 ] 00:11:30.098 [2024-06-10 11:35:01.940421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.356 [2024-06-10 11:35:02.214269] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=115301 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 115301 /var/tmp/spdk2.sock 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 115301 /var/tmp/spdk2.sock 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 115301 /var/tmp/spdk2.sock 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 115301 ']' 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:31.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:31.308 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 [2024-06-10 11:35:03.177299] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:11:31.308 [2024-06-10 11:35:03.177506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115301 ] 00:11:31.566 [2024-06-10 11:35:03.380794] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 115281 has claimed it. 00:11:31.566 [2024-06-10 11:35:03.380885] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:31.825 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (115301) - No such process 00:11:31.825 ERROR: process (pid: 115301) is no longer running 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 115281 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 115281 00:11:31.825 11:35:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:32.084 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 115281 00:11:32.084 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 115281 ']' 00:11:32.084 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 115281 00:11:32.084 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:32.084 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:32.084 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 115281 00:11:32.343 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:32.343 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:32.343 killing process with pid 115281 00:11:32.343 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 115281' 00:11:32.343 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 115281 00:11:32.343 11:35:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 115281 00:11:34.889 00:11:34.889 real 0m5.243s 00:11:34.889 user 0m5.517s 00:11:34.889 sys 0m0.772s 00:11:34.889 11:35:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:34.889 11:35:06 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.889 ************************************ 00:11:34.889 END TEST locking_app_on_locked_coremask 00:11:34.889 ************************************ 00:11:34.889 11:35:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:34.889 11:35:06 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:34.889 11:35:06 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:34.889 11:35:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.889 ************************************ 00:11:34.889 START TEST locking_overlapped_coremask 00:11:34.889 ************************************ 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=115377 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 115377 /var/tmp/spdk.sock 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 115377 ']' 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:34.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:34.889 11:35:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:35.147 [2024-06-10 11:35:07.013075] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:11:35.147 [2024-06-10 11:35:07.013256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115377 ] 00:11:35.147 [2024-06-10 11:35:07.188846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.406 [2024-06-10 11:35:07.407987] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.406 [2024-06-10 11:35:07.408188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.406 [2024-06-10 11:35:07.408196] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=115407 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 115407 /var/tmp/spdk2.sock 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 115407 /var/tmp/spdk2.sock 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 115407 /var/tmp/spdk2.sock 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 115407 ']' 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:36.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:36.341 11:35:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:36.341 [2024-06-10 11:35:08.387076] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:11:36.341 [2024-06-10 11:35:08.387368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115407 ] 00:11:36.601 [2024-06-10 11:35:08.619569] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 115377 has claimed it. 00:11:36.601 [2024-06-10 11:35:08.619664] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:37.167 ERROR: process (pid: 115407) is no longer running 00:11:37.167 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (115407) - No such process 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:37.167 11:35:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 115377 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 115377 ']' 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 115377 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 115377 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:37.168 killing process with pid 115377 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 115377' 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 115377 00:11:37.168 11:35:09 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@973 -- # wait 115377 00:11:40.465 00:11:40.465 real 0m4.951s 00:11:40.465 user 0m13.228s 00:11:40.465 sys 0m0.671s 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:40.465 ************************************ 00:11:40.465 END TEST locking_overlapped_coremask 00:11:40.465 ************************************ 00:11:40.465 11:35:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:40.465 11:35:11 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:40.465 11:35:11 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:40.465 11:35:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:40.465 ************************************ 00:11:40.465 START TEST locking_overlapped_coremask_via_rpc 00:11:40.465 ************************************ 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=115483 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 115483 /var/tmp/spdk.sock 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 115483 ']' 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:40.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:40.465 11:35:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.465 [2024-06-10 11:35:12.016166] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:11:40.465 [2024-06-10 11:35:12.016367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115483 ] 00:11:40.465 [2024-06-10 11:35:12.197909] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:40.465 [2024-06-10 11:35:12.198020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.465 [2024-06-10 11:35:12.445846] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.465 [2024-06-10 11:35:12.445979] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.465 [2024-06-10 11:35:12.445986] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=115506 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 115506 /var/tmp/spdk2.sock 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 115506 ']' 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:41.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:41.404 11:35:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.404 [2024-06-10 11:35:13.387682] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:11:41.404 [2024-06-10 11:35:13.387878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115506 ] 00:11:41.662 [2024-06-10 11:35:13.569336] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:41.662 [2024-06-10 11:35:13.569404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.230 [2024-06-10 11:35:14.016029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.230 [2024-06-10 11:35:14.030748] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.230 [2024-06-10 11:35:14.030749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.163 [2024-06-10 11:35:16.134871] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 115483 has claimed it. 
00:11:44.163 request: 00:11:44.163 { 00:11:44.163 "method": "framework_enable_cpumask_locks", 00:11:44.163 "req_id": 1 00:11:44.163 } 00:11:44.163 Got JSON-RPC error response 00:11:44.163 response: 00:11:44.163 { 00:11:44.163 "code": -32603, 00:11:44.163 "message": "Failed to claim CPU core: 2" 00:11:44.163 } 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 115483 /var/tmp/spdk.sock 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 115483 ']' 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:44.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:44.163 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 115506 /var/tmp/spdk2.sock 00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 115506 ']' 00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:44.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:44.422 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.681 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:44.681 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:44.681 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:44.681 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:44.681 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:44.681 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:44.681 00:11:44.681 real 0m4.691s 00:11:44.681 user 0m1.429s 00:11:44.681 sys 0m0.250s 00:11:44.681 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:44.681 ************************************ 00:11:44.681 END TEST locking_overlapped_coremask_via_rpc 00:11:44.681 ************************************ 00:11:44.681 11:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.681 11:35:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:44.681 11:35:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 115483 ]] 00:11:44.681 11:35:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 115483 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 115483 ']' 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 115483 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 115483 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 115483' 00:11:44.681 killing process with pid 115483 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 115483 00:11:44.681 11:35:16 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 115483 00:11:47.968 11:35:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115506 ]] 00:11:47.968 11:35:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115506 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 115506 ']' 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 115506 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 115506 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 115506' 00:11:47.968 killing process with pid 115506 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 115506 00:11:47.968 11:35:19 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 115506 00:11:50.499 11:35:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:50.499 11:35:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:50.499 11:35:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 115483 ]] 00:11:50.499 11:35:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 115483 00:11:50.499 11:35:22 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 115483 ']' 00:11:50.499 11:35:22 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 115483 00:11:50.499 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (115483) - No such process 00:11:50.499 11:35:22 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 115483 is not found' 00:11:50.499 Process with pid 115483 is not found 00:11:50.499 11:35:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115506 ]] 00:11:50.499 11:35:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115506 00:11:50.499 11:35:22 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 115506 ']' 00:11:50.499 11:35:22 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 115506 00:11:50.499 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (115506) - No such process 00:11:50.499 Process with pid 115506 is not found 00:11:50.499 11:35:22 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 115506 is not found' 00:11:50.499 11:35:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:50.499 00:11:50.499 real 0m54.633s 00:11:50.499 user 1m33.101s 00:11:50.499 sys 0m7.080s 00:11:50.499 11:35:22 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:50.499 11:35:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:50.499 ************************************ 00:11:50.499 END TEST cpu_locks 00:11:50.499 ************************************ 00:11:50.499 00:11:50.499 real 1m28.266s 00:11:50.499 user 2m36.734s 00:11:50.499 sys 0m11.545s 00:11:50.499 11:35:22 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:50.499 11:35:22 event -- common/autotest_common.sh@10 -- # set +x 00:11:50.499 ************************************ 00:11:50.499 END TEST event 00:11:50.499 ************************************ 00:11:50.499 11:35:22 -- spdk/autotest.sh@186 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:50.499 11:35:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:50.499 11:35:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:50.499 11:35:22 -- common/autotest_common.sh@10 -- # set +x 00:11:50.499 ************************************ 00:11:50.499 START TEST thread 00:11:50.499 ************************************ 00:11:50.499 11:35:22 thread -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:50.499 * Looking for test 
storage... 00:11:50.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:50.499 11:35:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:50.499 11:35:22 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:11:50.499 11:35:22 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:50.499 11:35:22 thread -- common/autotest_common.sh@10 -- # set +x 00:11:50.499 ************************************ 00:11:50.499 START TEST thread_poller_perf 00:11:50.499 ************************************ 00:11:50.499 11:35:22 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:50.499 [2024-06-10 11:35:22.275299] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:11:50.499 [2024-06-10 11:35:22.275552] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115716 ] 00:11:50.499 [2024-06-10 11:35:22.446179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.758 [2024-06-10 11:35:22.730341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.758 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:52.132 ====================================== 00:11:52.132 busy:2112381058 (cyc) 00:11:52.132 total_run_count: 372000 00:11:52.132 tsc_hz: 2100000000 (cyc) 00:11:52.132 ====================================== 00:11:52.132 poller_cost: 5678 (cyc), 2703 (nsec) 00:11:52.462 00:11:52.462 real 0m1.972s 00:11:52.462 user 0m1.742s 00:11:52.462 sys 0m0.129s 00:11:52.462 11:35:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:52.462 ************************************ 00:11:52.462 END TEST thread_poller_perf 00:11:52.462 11:35:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:52.462 ************************************ 00:11:52.462 11:35:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:52.462 11:35:24 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:11:52.462 11:35:24 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:52.462 11:35:24 thread -- common/autotest_common.sh@10 -- # set +x 00:11:52.462 ************************************ 00:11:52.462 START TEST thread_poller_perf 00:11:52.462 ************************************ 00:11:52.462 11:35:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:52.462 [2024-06-10 11:35:24.310187] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:11:52.462 [2024-06-10 11:35:24.310366] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115766 ] 00:11:52.462 [2024-06-10 11:35:24.470963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.733 [2024-06-10 11:35:24.707622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.733 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:54.634 ====================================== 00:11:54.634 busy:2103681600 (cyc) 00:11:54.634 total_run_count: 4773000 00:11:54.634 tsc_hz: 2100000000 (cyc) 00:11:54.634 ====================================== 00:11:54.634 poller_cost: 440 (cyc), 209 (nsec) 00:11:54.634 00:11:54.634 real 0m1.909s 00:11:54.634 user 0m1.684s 00:11:54.634 sys 0m0.125s 00:11:54.634 11:35:26 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:54.634 11:35:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:54.634 ************************************ 00:11:54.634 END TEST thread_poller_perf 00:11:54.634 ************************************ 00:11:54.634 11:35:26 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:54.634 11:35:26 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:54.634 11:35:26 thread -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:54.634 11:35:26 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:54.634 11:35:26 thread -- common/autotest_common.sh@10 -- # set +x 00:11:54.634 ************************************ 00:11:54.634 START TEST thread_spdk_lock 00:11:54.634 ************************************ 00:11:54.634 11:35:26 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:54.634 [2024-06-10 11:35:26.289819] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:11:54.634 [2024-06-10 11:35:26.289966] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115814 ] 00:11:54.634 [2024-06-10 11:35:26.454073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:54.634 [2024-06-10 11:35:26.687529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.634 [2024-06-10 11:35:26.687530] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.211 [2024-06-10 11:35:27.206105] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:55.211 [2024-06-10 11:35:27.206238] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:55.211 [2024-06-10 11:35:27.206293] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x561dcbb70300 00:11:55.211 [2024-06-10 11:35:27.218156] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:55.211 [2024-06-10 11:35:27.218276] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:55.211 [2024-06-10 11:35:27.218310] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:55.812 Starting test contend 00:11:55.812 Worker Delay Wait us Hold us Total us 00:11:55.812 0 3 130055 192236 322292 00:11:55.812 1 5 63920 297956 361876 00:11:55.812 PASS test contend 00:11:55.812 Starting test hold_by_poller 00:11:55.812 PASS test hold_by_poller 00:11:55.812 Starting test hold_by_message 00:11:55.812 PASS test hold_by_message 00:11:55.812 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:55.812 100014 assertions passed 00:11:55.812 0 assertions failed 00:11:55.812 00:11:55.812 real 0m1.442s 00:11:55.812 user 0m1.758s 00:11:55.812 sys 0m0.116s 00:11:55.812 11:35:27 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:55.812 11:35:27 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:55.812 ************************************ 00:11:55.812 END TEST thread_spdk_lock 00:11:55.812 ************************************ 00:11:55.812 00:11:55.812 real 0m5.621s 00:11:55.812 user 0m5.327s 00:11:55.812 sys 0m0.531s 00:11:55.812 11:35:27 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:55.812 11:35:27 thread -- common/autotest_common.sh@10 -- # set +x 00:11:55.812 ************************************ 00:11:55.812 END TEST thread 00:11:55.812 ************************************ 00:11:55.812 11:35:27 -- spdk/autotest.sh@187 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:55.812 11:35:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:55.812 11:35:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:55.812 11:35:27 -- common/autotest_common.sh@10 -- # set +x 00:11:55.812 
************************************ 00:11:55.812 START TEST accel 00:11:55.812 ************************************ 00:11:55.812 11:35:27 accel -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:56.071 * Looking for test storage... 00:11:56.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:56.071 11:35:27 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:56.071 11:35:27 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:11:56.071 11:35:27 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:56.071 11:35:27 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=115894 00:11:56.071 11:35:27 accel -- accel/accel.sh@63 -- # waitforlisten 115894 00:11:56.071 11:35:27 accel -- common/autotest_common.sh@830 -- # '[' -z 115894 ']' 00:11:56.071 11:35:27 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:56.071 11:35:27 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.071 11:35:27 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:56.071 11:35:27 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.071 11:35:27 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:56.071 11:35:27 accel -- common/autotest_common.sh@10 -- # set +x 00:11:56.071 11:35:27 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:56.071 11:35:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:56.071 11:35:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:56.071 11:35:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:56.071 11:35:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:56.071 11:35:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:56.071 11:35:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:56.071 11:35:27 accel -- accel/accel.sh@41 -- # jq -r . 00:11:56.071 [2024-06-10 11:35:27.986828] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:11:56.071 [2024-06-10 11:35:27.987114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115894 ] 00:11:56.330 [2024-06-10 11:35:28.167769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.589 [2024-06-10 11:35:28.418320] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@863 -- # return 0 00:11:57.527 11:35:29 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:11:57.527 11:35:29 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:11:57.527 11:35:29 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:11:57.527 11:35:29 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:11:57.527 11:35:29 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:57.527 11:35:29 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@10 -- # set +x 00:11:57.527 11:35:29 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # IFS== 00:11:57.527 11:35:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:57.527 11:35:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:57.527 11:35:29 accel -- accel/accel.sh@75 -- # killprocess 115894 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@949 -- # '[' -z 115894 ']' 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@953 -- # kill -0 115894 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@954 -- # uname 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 115894 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:57.527 killing process with pid 115894 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 115894' 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@968 -- # kill 115894 00:11:57.527 11:35:29 accel -- common/autotest_common.sh@973 -- # wait 115894 00:12:00.056 11:35:31 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:00.056 11:35:31 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:00.056 11:35:31 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:00.056 11:35:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:00.056 11:35:31 accel -- common/autotest_common.sh@10 -- # set +x 00:12:00.056 11:35:31 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@36 
-- # [[ -n '' ]] 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:00.056 11:35:31 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:12:00.056 11:35:31 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:00.056 11:35:31 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:00.056 11:35:31 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:00.056 11:35:31 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:00.056 11:35:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:00.056 11:35:31 accel -- common/autotest_common.sh@10 -- # set +x 00:12:00.056 ************************************ 00:12:00.056 START TEST accel_missing_filename 00:12:00.056 ************************************ 00:12:00.056 11:35:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:12:00.056 11:35:31 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:12:00.056 11:35:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:00.056 11:35:31 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:00.056 11:35:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:00.056 11:35:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:00.056 11:35:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:00.056 11:35:31 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:00.056 11:35:31 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:12:00.056 [2024-06-10 11:35:31.987066] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:00.056 [2024-06-10 11:35:31.987227] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115990 ] 00:12:00.313 [2024-06-10 11:35:32.148416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.571 [2024-06-10 11:35:32.370353] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.571 [2024-06-10 11:35:32.587818] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.505 [2024-06-10 11:35:33.222478] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:12:01.764 A filename is required. 
00:12:01.764 11:35:33 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:12:01.764 11:35:33 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:01.764 11:35:33 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:12:01.764 11:35:33 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:12:01.764 11:35:33 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:12:01.764 11:35:33 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:01.764 00:12:01.764 real 0m1.752s 00:12:01.764 user 0m1.518s 00:12:01.764 sys 0m0.178s 00:12:01.764 11:35:33 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:01.764 11:35:33 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:01.764 ************************************ 00:12:01.764 END TEST accel_missing_filename 00:12:01.764 ************************************ 00:12:01.764 11:35:33 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:01.764 11:35:33 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:12:01.764 11:35:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:01.764 11:35:33 accel -- common/autotest_common.sh@10 -- # set +x 00:12:01.764 ************************************ 00:12:01.764 START TEST accel_compress_verify 00:12:01.764 ************************************ 00:12:01.764 11:35:33 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:01.764 11:35:33 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:12:01.764 11:35:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:01.764 11:35:33 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:01.764 11:35:33 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.764 11:35:33 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:01.764 11:35:33 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.764 11:35:33 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:01.764 11:35:33 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:01.764 11:35:33 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:01.764 11:35:33 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:01.764 11:35:33 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:01.764 11:35:33 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:01.764 11:35:33 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:01.764 11:35:33 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:01.764 11:35:33 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:01.764 11:35:33 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:12:01.764 [2024-06-10 11:35:33.814229] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:01.764 [2024-06-10 11:35:33.814453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116042 ] 00:12:02.036 [2024-06-10 11:35:33.994514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.294 [2024-06-10 11:35:34.210323] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.552 [2024-06-10 11:35:34.444132] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:03.118 [2024-06-10 11:35:35.061151] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:12:03.685 00:12:03.685 Compression does not support the verify option, aborting. 00:12:03.685 11:35:35 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:12:03.685 11:35:35 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:03.685 11:35:35 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:12:03.685 11:35:35 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:12:03.685 11:35:35 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:12:03.685 11:35:35 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:03.685 00:12:03.685 real 0m1.762s 00:12:03.685 user 0m1.489s 00:12:03.685 sys 0m0.205s 00:12:03.685 11:35:35 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:03.685 ************************************ 00:12:03.685 END TEST accel_compress_verify 00:12:03.685 11:35:35 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:03.685 ************************************ 00:12:03.685 11:35:35 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:03.685 11:35:35 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:03.685 11:35:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:03.685 11:35:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:03.685 ************************************ 00:12:03.685 START TEST accel_wrong_workload 00:12:03.685 ************************************ 00:12:03.685 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:12:03.685 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:12:03.685 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:03.685 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:03.685 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:12:03.686 11:35:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:03.686 11:35:35 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:03.686 11:35:35 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:03.686 11:35:35 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:03.686 11:35:35 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.686 11:35:35 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.686 11:35:35 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:03.686 11:35:35 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:03.686 11:35:35 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:12:03.686 Unsupported workload type: foobar 00:12:03.686 [2024-06-10 11:35:35.634735] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:03.686 accel_perf options: 00:12:03.686 [-h help message] 00:12:03.686 [-q queue depth per core] 00:12:03.686 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:03.686 [-T number of threads per core 00:12:03.686 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:03.686 [-t time in seconds] 00:12:03.686 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:03.686 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:03.686 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:03.686 [-l for compress/decompress workloads, name of uncompressed input file 00:12:03.686 [-S for crc32c workload, use this seed value (default 0) 00:12:03.686 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:03.686 [-f for fill workload, use this BYTE value (default 255) 00:12:03.686 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:03.686 [-y verify result if this switch is on] 00:12:03.686 [-a tasks to allocate per core (default: same value as -q)] 00:12:03.686 Can be used to spread operations across a wider range of memory. 
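The option listing above is accel_perf's own usage text, printed because 'foobar' is not a supported workload type. For reference, the two shapes of invocation this suite exercises look roughly like the lines below (binary path as used throughout this log; the harness additionally passes a JSON config over -c /dev/fd/62, omitted here):

    # expected to succeed: software crc32c for 1 second on the default 4 KiB buffers, with result verification (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

    # expected to fail: 'foobar' is not in the supported workload list, so the app exits non-zero
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar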
00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:03.686 00:12:03.686 real 0m0.087s 00:12:03.686 user 0m0.086s 00:12:03.686 sys 0m0.038s 00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:03.686 11:35:35 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:03.686 ************************************ 00:12:03.686 END TEST accel_wrong_workload 00:12:03.686 ************************************ 00:12:03.686 11:35:35 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:03.686 11:35:35 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:12:03.686 11:35:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:03.686 11:35:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:03.686 ************************************ 00:12:03.686 START TEST accel_negative_buffers 00:12:03.686 ************************************ 00:12:03.686 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:03.686 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:12:03.686 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:03.686 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:03.686 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:03.686 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:03.686 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:03.686 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:03.686 11:35:35 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:03.944 -x option must be non-negative. 
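The 'es=' bookkeeping in the traces above is autotest_common.sh confirming that the wrapped accel_perf call really failed: the exit status is captured, statuses above 128 (signal exits) are folded back into range, and the final '(( !es == 0 ))' only succeeds when the status is non-zero. A stripped-down, purely illustrative wrapper with the same shape (the name and the handling details here are not the actual helper's):

    expect_failure() {
        local es=0
        "$@" || es=$?                         # run the wrapped command, remember its exit status
        (( es > 128 )) && es=$(( es - 128 ))  # fold a signal exit (>128) back toward the normal range
        (( es != 0 ))                         # pass only if the wrapped command failed
    }

    expect_failure accel_perf -t 1 -w foobar  # 'foobar' is rejected, so this returns success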
00:12:03.944 [2024-06-10 11:35:35.780174] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:03.944 accel_perf options: 00:12:03.944 [-h help message] 00:12:03.944 [-q queue depth per core] 00:12:03.944 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:03.944 [-T number of threads per core 00:12:03.944 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:03.944 [-t time in seconds] 00:12:03.944 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:03.944 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:03.944 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:03.944 [-l for compress/decompress workloads, name of uncompressed input file 00:12:03.944 [-S for crc32c workload, use this seed value (default 0) 00:12:03.944 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:03.944 [-f for fill workload, use this BYTE value (default 255) 00:12:03.944 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:03.944 [-y verify result if this switch is on] 00:12:03.944 [-a tasks to allocate per core (default: same value as -q)] 00:12:03.944 Can be used to spread operations across a wider range of memory. 00:12:03.944 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:12:03.944 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:03.944 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:03.944 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:03.944 00:12:03.944 real 0m0.094s 00:12:03.944 user 0m0.115s 00:12:03.944 sys 0m0.048s 00:12:03.944 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:03.944 11:35:35 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:03.944 ************************************ 00:12:03.944 END TEST accel_negative_buffers 00:12:03.944 ************************************ 00:12:03.944 11:35:35 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:03.944 11:35:35 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:03.944 11:35:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:03.944 11:35:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:03.944 ************************************ 00:12:03.944 START TEST accel_crc32c 00:12:03.944 ************************************ 00:12:03.944 11:35:35 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:03.944 11:35:35 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:03.944 11:35:35 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:03.944 [2024-06-10 11:35:35.929652] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:03.944 [2024-06-10 11:35:35.929883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116131 ] 00:12:04.202 [2024-06-10 11:35:36.113714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.459 [2024-06-10 11:35:36.405284] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:04.717 11:35:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
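The long run of 'IFS=: / read -r var val / case "$var" in' lines above is accel.sh stepping through this run's configuration one key/value pair at a time: the crc32c opcode, the software module, 4096-byte buffers, a 1 second duration, verification enabled, and the remaining numeric parameters (the 32s and the 1 in the trace). The trace reduces to a loop of roughly the following shape; this is an illustrative reconstruction, not the script's literal code, and the key names on the left are placeholders (only the accel_opc/accel_module assignments mirror the accel.sh@22/@23 lines visible above):

    # walk colon-separated key/value pairs and record the ones the test compares later
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # e.g. crc32c
            module) accel_module=$val ;;  # e.g. software
            *)      ;;                    # queue depth, duration, block size, ...
        esac
    done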
00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:07.286 11:35:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.286 00:12:07.286 real 0m2.966s 00:12:07.286 user 0m2.709s 00:12:07.286 sys 0m0.175s 00:12:07.286 ************************************ 00:12:07.286 11:35:38 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:07.286 11:35:38 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:07.286 END TEST accel_crc32c 00:12:07.286 ************************************ 00:12:07.286 11:35:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:07.286 11:35:38 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:07.286 11:35:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:07.286 11:35:38 accel -- common/autotest_common.sh@10 -- # set +x 00:12:07.286 ************************************ 00:12:07.286 START TEST accel_crc32c_C2 00:12:07.286 ************************************ 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:07.286 11:35:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:07.286 [2024-06-10 11:35:38.952869] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:07.286 [2024-06-10 11:35:38.953217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116196 ] 00:12:07.286 [2024-06-10 11:35:39.136826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.544 [2024-06-10 11:35:39.442907] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.803 11:35:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:10.336 00:12:10.336 real 0m3.042s 00:12:10.336 user 0m2.722s 00:12:10.336 sys 0m0.250s 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:10.336 11:35:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:10.336 ************************************ 00:12:10.336 END TEST accel_crc32c_C2 00:12:10.336 ************************************ 00:12:10.336 11:35:41 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:10.336 11:35:41 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:10.336 11:35:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:10.336 11:35:41 accel -- common/autotest_common.sh@10 -- # set +x 00:12:10.336 ************************************ 00:12:10.336 START TEST accel_copy 00:12:10.336 ************************************ 00:12:10.336 11:35:41 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:12:10.336 11:35:41 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:10.336 11:35:41 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:10.336 11:35:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.336 11:35:41 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:10.336 11:35:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:10.336 11:35:42 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:10.336 [2024-06-10 11:35:42.045549] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:10.336 [2024-06-10 11:35:42.046318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116259 ] 00:12:10.336 [2024-06-10 11:35:42.214829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.599 [2024-06-10 11:35:42.536221] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.858 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.858 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.858 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.858 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.859 11:35:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:13.392 11:35:45 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:13.392 ************************************ 00:12:13.392 END TEST accel_copy 00:12:13.392 ************************************ 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:13.392 11:35:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:13.392 00:12:13.392 real 0m3.045s 00:12:13.392 user 0m2.792s 00:12:13.392 sys 0m0.184s 00:12:13.392 11:35:45 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:13.392 11:35:45 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:13.392 11:35:45 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:13.392 11:35:45 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:12:13.392 11:35:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:13.392 11:35:45 accel -- common/autotest_common.sh@10 -- # set +x 00:12:13.392 ************************************ 00:12:13.392 START TEST accel_fill 00:12:13.392 ************************************ 00:12:13.392 11:35:45 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:13.392 11:35:45 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
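This fill run was launched as '-t 1 -w fill -f 128 -q 64 -a 64 -y'; in the configuration trace that follows, the fill byte shows up as 0x80 (decimal 128) and 64 is used for both the queue depth and the per-core task count, matching the usage note above that -a defaults to the -q value. A standalone equivalent of the invocation, with the binary path used throughout this log, would be approximately:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y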
00:12:13.392 [2024-06-10 11:35:45.165920] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:13.392 [2024-06-10 11:35:45.166170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116323 ] 00:12:13.392 [2024-06-10 11:35:45.341050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.650 [2024-06-10 11:35:45.575293] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:13.909 11:35:45 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.909 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.910 11:35:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:16.487 11:35:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:16.487 00:12:16.487 real 0m2.987s 00:12:16.487 user 0m2.742s 00:12:16.487 sys 0m0.198s 00:12:16.487 11:35:48 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:16.487 11:35:48 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:16.487 ************************************ 00:12:16.487 END TEST accel_fill 00:12:16.487 ************************************ 00:12:16.487 11:35:48 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:16.487 11:35:48 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:16.487 11:35:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:16.487 11:35:48 accel -- common/autotest_common.sh@10 -- # set +x 00:12:16.487 ************************************ 00:12:16.487 START TEST accel_copy_crc32c 00:12:16.487 ************************************ 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:16.487 11:35:48 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:16.487 [2024-06-10 11:35:48.212440] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:12:16.487 [2024-06-10 11:35:48.212672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116380 ] 00:12:16.487 [2024-06-10 11:35:48.402789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.751 [2024-06-10 11:35:48.724765] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:17.318 11:35:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:19.848 00:12:19.848 real 0m3.172s 00:12:19.848 user 0m2.891s 00:12:19.848 sys 0m0.211s 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:19.848 11:35:51 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:19.848 ************************************ 00:12:19.848 END TEST accel_copy_crc32c 00:12:19.848 ************************************ 00:12:19.848 11:35:51 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:19.848 11:35:51 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:19.848 11:35:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:19.848 11:35:51 accel -- common/autotest_common.sh@10 -- # set +x 00:12:19.848 ************************************ 00:12:19.848 START TEST accel_copy_crc32c_C2 00:12:19.848 ************************************ 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:19.848 11:35:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:19.848 [2024-06-10 11:35:51.438846] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:19.848 [2024-06-10 11:35:51.439078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116445 ] 00:12:19.848 [2024-06-10 11:35:51.624001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.848 [2024-06-10 11:35:51.888356] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.469 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:20.469 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.469 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.469 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.469 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:20.469 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.469 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.469 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:20.470 11:35:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:23.015 00:12:23.015 real 0m3.091s 00:12:23.015 user 0m2.804s 00:12:23.015 sys 0m0.218s 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:23.015 ************************************ 00:12:23.015 11:35:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:23.015 END TEST accel_copy_crc32c_C2 00:12:23.015 ************************************ 00:12:23.015 11:35:54 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:12:23.015 11:35:54 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:23.015 11:35:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:23.015 11:35:54 accel -- common/autotest_common.sh@10 -- # set +x 00:12:23.015 ************************************ 00:12:23.015 START TEST accel_dualcast 00:12:23.015 ************************************ 00:12:23.015 11:35:54 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:23.015 11:35:54 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:23.015 [2024-06-10 11:35:54.585694] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
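Each TEST section in this trace follows the same pattern: run_test launches accel_test, which starts the accel_perf example binary with the workload named in the banner, and the val= lines that follow are the script reading its own configuration back. As a hedged sketch (paths and flags taken only from the command echoed at accel.sh@12 above; dropping -c /dev/fd/62 is an assumption that leaves accel_perf on its default software path, which the [[ -n software ]] checks later assert), the dualcast case above could be rerun by hand from an SPDK build tree as:

  # rerun the dualcast workload from the trace above; -t 1 (one second) and -y (verify)
  # match the '1 seconds' and 'Yes' values read back below
  ./build/examples/accel_perf -t 1 -w dualcast -y

The copy_crc32c_C2 section above is the same invocation with -w copy_crc32c -C 2 instead.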
00:12:23.015 [2024-06-10 11:35:54.585918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116508 ] 00:12:23.015 [2024-06-10 11:35:54.771937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.274 [2024-06-10 11:35:55.084753] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:23.532 11:35:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:26.061 11:35:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:26.061 11:35:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:26.061 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:26.061 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:26.061 11:35:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:26.061 11:35:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:26.061 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:26.062 
11:35:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:26.062 11:35:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:26.062 00:12:26.062 real 0m3.077s 00:12:26.062 user 0m2.773s 00:12:26.062 sys 0m0.219s 00:12:26.062 11:35:57 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:26.062 11:35:57 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:26.062 ************************************ 00:12:26.062 END TEST accel_dualcast 00:12:26.062 ************************************ 00:12:26.062 11:35:57 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:26.062 11:35:57 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:26.062 11:35:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:26.062 11:35:57 accel -- common/autotest_common.sh@10 -- # set +x 00:12:26.062 ************************************ 00:12:26.062 START TEST accel_compare 00:12:26.062 ************************************ 00:12:26.062 11:35:57 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:26.062 11:35:57 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:26.062 [2024-06-10 11:35:57.706429] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:12:26.062 [2024-06-10 11:35:57.706799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116571 ] 00:12:26.062 [2024-06-10 11:35:57.867801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.062 [2024-06-10 11:35:58.101879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:26.320 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.321 11:35:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:28.850 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:28.851 11:36:00 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:28.851 11:36:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:28.851 00:12:28.851 real 0m2.873s 00:12:28.851 user 0m2.655s 00:12:28.851 sys 0m0.144s 00:12:28.851 11:36:00 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:28.851 11:36:00 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:28.851 ************************************ 00:12:28.851 END TEST accel_compare 00:12:28.851 ************************************ 00:12:28.851 11:36:00 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:28.851 11:36:00 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:28.851 11:36:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:28.851 11:36:00 accel -- common/autotest_common.sh@10 -- # set +x 00:12:28.851 ************************************ 00:12:28.851 START TEST accel_xor 00:12:28.851 ************************************ 00:12:28.851 11:36:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:28.851 11:36:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:28.851 [2024-06-10 11:36:00.667625] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
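The accel_xor section starting here drives accel_perf with -w xor -y and no -x flag; the trace below reads back val=2, which lines up with accel_perf defaulting to two 4096-byte source buffers for the xor operation (an interpretation of the readback, not something the log states). A minimal manual equivalent, under the same assumptions as the earlier sketch:

  # plain xor case: no -x flag, so two source buffers are used (val=2 in the readback below)
  ./build/examples/accel_perf -t 1 -w xor -y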
00:12:28.851 [2024-06-10 11:36:00.667892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116631 ] 00:12:28.851 [2024-06-10 11:36:00.858049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.108 [2024-06-10 11:36:01.134101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.366 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:12:29.367 11:36:01 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:29.367 11:36:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:31.979 00:12:31.979 real 0m2.922s 00:12:31.979 user 0m2.673s 00:12:31.979 sys 0m0.176s 00:12:31.979 11:36:03 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:31.979 ************************************ 00:12:31.979 END TEST accel_xor 00:12:31.979 11:36:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:31.979 ************************************ 00:12:31.979 11:36:03 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:31.979 11:36:03 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:31.979 11:36:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:31.979 11:36:03 accel -- common/autotest_common.sh@10 -- # set +x 00:12:31.979 ************************************ 00:12:31.979 START TEST accel_xor 00:12:31.979 ************************************ 00:12:31.979 11:36:03 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:31.979 11:36:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:31.979 [2024-06-10 11:36:03.634525] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:12:31.979 [2024-06-10 11:36:03.634770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116694 ] 00:12:31.979 [2024-06-10 11:36:03.797546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.979 [2024-06-10 11:36:04.030506] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:12:32.236 11:36:04 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.236 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.237 11:36:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:34.783 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:34.784 11:36:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:34.784 00:12:34.784 real 0m2.892s 00:12:34.784 user 0m2.649s 00:12:34.784 sys 0m0.175s 00:12:34.784 11:36:06 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:34.784 11:36:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:34.784 ************************************ 00:12:34.784 END TEST accel_xor 00:12:34.784 ************************************ 00:12:34.784 11:36:06 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:34.784 11:36:06 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:12:34.784 11:36:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:34.784 11:36:06 accel -- common/autotest_common.sh@10 -- # set +x 00:12:34.784 ************************************ 00:12:34.784 START TEST accel_dif_verify 00:12:34.784 ************************************ 00:12:34.784 11:36:06 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:34.784 11:36:06 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:34.784 [2024-06-10 11:36:06.580166] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:12:34.784 [2024-06-10 11:36:06.580778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116755 ] 00:12:34.784 [2024-06-10 11:36:06.741560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.041 [2024-06-10 11:36:06.965855] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.301 11:36:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:37.870 11:36:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:37.870 00:12:37.870 real 0m2.847s 00:12:37.870 user 0m2.610s 00:12:37.870 sys 0m0.184s 00:12:37.870 11:36:09 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:37.870 11:36:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:37.870 ************************************ 00:12:37.870 END TEST accel_dif_verify 00:12:37.870 ************************************ 00:12:37.870 11:36:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:37.870 11:36:09 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:12:37.870 11:36:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:37.870 11:36:09 accel -- common/autotest_common.sh@10 -- # set +x 00:12:37.870 ************************************ 00:12:37.870 START TEST accel_dif_generate 00:12:37.870 ************************************ 00:12:37.870 11:36:09 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:37.870 11:36:09 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:37.870 11:36:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:37.870 [2024-06-10 11:36:09.497034] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:37.870 [2024-06-10 11:36:09.497436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116816 ] 00:12:37.870 [2024-06-10 11:36:09.672771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.870 [2024-06-10 11:36:09.908722] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.129 11:36:10 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:38.130 11:36:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:40.668 11:36:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:40.668 00:12:40.668 real 0m2.870s 00:12:40.668 user 0m2.598s 00:12:40.668 sys 0m0.199s 00:12:40.668 11:36:12 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:40.668 
************************************ 00:12:40.668 END TEST accel_dif_generate 00:12:40.668 11:36:12 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:40.668 ************************************ 00:12:40.668 11:36:12 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:40.668 11:36:12 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:12:40.668 11:36:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:40.668 11:36:12 accel -- common/autotest_common.sh@10 -- # set +x 00:12:40.668 ************************************ 00:12:40.668 START TEST accel_dif_generate_copy 00:12:40.668 ************************************ 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:40.668 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:40.669 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:40.669 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:40.669 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:40.669 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:40.669 11:36:12 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:40.669 [2024-06-10 11:36:12.426645] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
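The build_accel_config trace repeated at the start of each test (accel_json_cfg=(), the three [[ 0 -gt 0 ]] checks, [[ -n '' ]], local IFS=, and jq -r .) suggests the harness assembles an optional JSON accel configuration from per-run options and hands it to accel_perf as the -c /dev/fd/62 argument; in these runs nothing is requested, so every check is false and the config stays empty. A minimal sketch of that pattern follows; only accel_json_cfg, the comma join and jq -r . are visible in the trace, everything else is an assumption (the real logic lives in test/accel/accel.sh):

    # sketch only, not the literal accel.sh source
    accel_json_cfg=()                 # per-module JSON fragments; all the [[ 0 -gt 0 ]] checks above left it empty
    IFS=,                             # fragments are comma-joined into one document
    accel_conf="[${accel_json_cfg[*]}]"   # assumed shape of the final document
    ./build/examples/accel_perf -c <(jq -r . <<< "$accel_conf") -t 1 -w dif_generate_copy
    # the <(...) process substitution is what shows up as -c /dev/fd/62 in the xtrace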
00:12:40.669 [2024-06-10 11:36:12.427041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116879 ] 00:12:40.669 [2024-06-10 11:36:12.623950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.926 [2024-06-10 11:36:12.906835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:41.185 11:36:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:43.717 11:36:15 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:43.717 00:12:43.717 real 0m2.931s 00:12:43.717 user 0m2.598s 00:12:43.717 sys 0m0.245s 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:43.717 11:36:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.717 ************************************ 00:12:43.717 END TEST accel_dif_generate_copy 00:12:43.717 ************************************ 00:12:43.717 11:36:15 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:43.717 11:36:15 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.717 11:36:15 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:12:43.717 11:36:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:43.717 11:36:15 accel -- common/autotest_common.sh@10 -- # set +x 00:12:43.717 ************************************ 00:12:43.717 START TEST accel_comp 00:12:43.717 ************************************ 00:12:43.717 11:36:15 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:43.717 11:36:15 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:43.717 11:36:15 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:43.717 [2024-06-10 11:36:15.420887] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:43.717 [2024-06-10 11:36:15.421101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116937 ] 00:12:43.717 [2024-06-10 11:36:15.603261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.975 [2024-06-10 11:36:15.848246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp 
-- accel/accel.sh@20 -- # val=compress 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 
00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:44.234 11:36:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:46.136 11:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:46.396 11:36:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:46.396 11:36:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:46.396 11:36:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:46.396 00:12:46.397 real 0m2.828s 00:12:46.397 user 0m2.621s 00:12:46.397 sys 0m0.148s 00:12:46.397 11:36:18 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:46.397 11:36:18 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:46.397 ************************************ 00:12:46.397 END TEST accel_comp 00:12:46.397 ************************************ 00:12:46.397 11:36:18 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:46.397 11:36:18 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:46.397 11:36:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:46.397 11:36:18 accel -- common/autotest_common.sh@10 -- # set +x 00:12:46.397 ************************************ 00:12:46.397 START TEST accel_decomp 00:12:46.397 ************************************ 00:12:46.397 11:36:18 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@16 -- # local 
accel_opc 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:46.397 11:36:18 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:46.397 [2024-06-10 11:36:18.312176] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:46.397 [2024-06-10 11:36:18.312399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116993 ] 00:12:46.656 [2024-06-10 11:36:18.498521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.915 [2024-06-10 11:36:18.820613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:47.174 11:36:19 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r 
var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:47.174 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:47.175 11:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:49.709 11:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:49.709 00:12:49.709 real 0m3.166s 00:12:49.709 user 0m2.886s 00:12:49.709 sys 0m0.208s 00:12:49.709 11:36:21 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:49.709 ************************************ 00:12:49.709 END TEST accel_decomp 00:12:49.709 ************************************ 00:12:49.709 11:36:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:49.709 11:36:21 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:49.709 11:36:21 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:12:49.709 11:36:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:49.709 11:36:21 accel -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.709 ************************************ 00:12:49.709 START TEST accel_decomp_full 00:12:49.709 ************************************ 00:12:49.709 11:36:21 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:12:49.709 11:36:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:12:49.709 [2024-06-10 11:36:21.538149] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
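From accel_comp onward the same wrapper is reused, but the workloads operate on a file rather than the synthetic 4096-byte buffers: the traces show accel_perf being pointed at the bib corpus under test/accel. As they appear in the log (with the long prefix shortened to $SPDK_REPO here purely for readability):

    # invocations copied from the xtrace; $SPDK_REPO stands for /home/vagrant/spdk_repo/spdk
    $SPDK_REPO/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress   -l $SPDK_REPO/test/accel/bib          # accel_comp
    $SPDK_REPO/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK_REPO/test/accel/bib -y       # accel_decomp
    $SPDK_REPO/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK_REPO/test/accel/bib -y -o 0  # accel_decomp_full (this run)

The -l/-y/-o flag semantics are not spelled out in the log itself; what is visible is that the accel_decomp_full config dump below reads back a '111250 bytes' buffer where the other tests report '4096 bytes'.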
00:12:49.709 [2024-06-10 11:36:21.538361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117066 ] 00:12:49.709 [2024-06-10 11:36:21.722102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.968 [2024-06-10 11:36:21.986347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.225 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:50.226 11:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:52.757 11:36:24 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:52.757 11:36:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:52.757 00:12:52.757 real 0m2.977s 00:12:52.757 user 0m2.684s 00:12:52.757 sys 0m0.220s 00:12:52.757 11:36:24 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:52.757 ************************************ 00:12:52.757 END TEST accel_decomp_full 00:12:52.757 ************************************ 00:12:52.757 11:36:24 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:12:52.757 11:36:24 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:52.757 11:36:24 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:12:52.757 11:36:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:52.757 11:36:24 accel -- common/autotest_common.sh@10 -- # set +x 00:12:52.757 ************************************ 00:12:52.757 START TEST accel_decomp_mcore 00:12:52.757 ************************************ 00:12:52.757 11:36:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:52.757 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
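The accel_perf line echoed just above starts accel_decomp_mcore: the workload is unchanged, but -m 0xf requests a core mask of 0b1111, and the log below duly reports four available cores and four reactors. A roughly equivalent manual invocation (the config flag is omitted here; whether it is optional is an assumption, see the sketch earlier for how the harness supplies it):

# Sketch: same decompress workload fanned out over cores 0-3 via the 0xf core mask.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
"$ACCEL_PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf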
00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:52.758 11:36:24 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:52.758 [2024-06-10 11:36:24.564635] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:52.758 [2024-06-10 11:36:24.564831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117129 ] 00:12:52.758 [2024-06-10 11:36:24.769479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.017 [2024-06-10 11:36:25.029754] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.017 [2024-06-10 11:36:25.030333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.017 [2024-06-10 11:36:25.030463] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.017 [2024-06-10 11:36:25.030464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:53.277 11:36:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:55.813 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:55.814 00:12:55.814 real 0m3.109s 00:12:55.814 user 0m8.864s 00:12:55.814 sys 0m0.270s 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:55.814 11:36:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:55.814 ************************************ 00:12:55.814 END TEST accel_decomp_mcore 00:12:55.814 ************************************ 00:12:55.814 11:36:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:55.814 11:36:27 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:12:55.814 11:36:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:55.814 11:36:27 accel -- common/autotest_common.sh@10 -- # set +x 00:12:55.814 ************************************ 00:12:55.814 START TEST accel_decomp_full_mcore 00:12:55.814 ************************************ 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:55.814 11:36:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
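The timing summary printed above for accel_decomp_mcore (real 0m3.109s, user 0m8.864s, sys 0m0.270s) shows CPU time well above wall-clock time, which is what you expect with four reactors running the workload in parallel. A quick sanity check of the average number of busy cores:

# Average busy cores during the mcore run, using the figures from the timing line above.
echo 'scale=2; (8.864 + 0.270) / 3.109' | bc   # ~2.94 of the 4 cores requested by -m 0xf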
00:12:55.814 [2024-06-10 11:36:27.732591] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:12:55.814 [2024-06-10 11:36:27.732835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117190 ] 00:12:56.074 [2024-06-10 11:36:27.935996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.333 [2024-06-10 11:36:28.212433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.333 [2024-06-10 11:36:28.212735] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.333 [2024-06-10 11:36:28.212814] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.333 [2024-06-10 11:36:28.212820] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.592 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.593 11:36:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:56.593 11:36:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:59.136 11:36:30 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:59.136 00:12:59.136 real 0m3.187s 00:12:59.136 user 0m9.202s 00:12:59.136 sys 0m0.251s 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:59.136 11:36:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 ************************************ 00:12:59.136 END TEST accel_decomp_full_mcore 00:12:59.136 ************************************ 00:12:59.136 11:36:30 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:59.136 11:36:30 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:12:59.136 11:36:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:59.136 11:36:30 accel -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 ************************************ 00:12:59.136 START TEST accel_decomp_mthread 00:12:59.136 ************************************ 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:59.136 11:36:30 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:59.136 [2024-06-10 11:36:30.976889] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
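accel_decomp_mthread, whose setup is traced above, drops the multi-core mask and instead passes -T 2; in the value trace below this appears as val=2, read here as two worker threads driving the decompress operations on the single core (that reading of -T is an assumption based on the option name, not something this log states). A sketch of the invocation:

# Sketch: single core, two worker threads per core via -T 2 (config flag omitted as above).
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
"$ACCEL_PERF" -t 1 -w decompress -l "$BIB" -y -T 2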
00:12:59.136 [2024-06-10 11:36:30.977091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117258 ] 00:12:59.136 [2024-06-10 11:36:31.147078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.394 [2024-06-10 11:36:31.410447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:59.653 11:36:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:02.185 00:13:02.185 real 0m3.004s 00:13:02.185 user 0m2.698s 00:13:02.185 sys 0m0.218s 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:02.185 11:36:33 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:02.185 ************************************ 00:13:02.185 END TEST accel_decomp_mthread 00:13:02.185 ************************************ 00:13:02.185 11:36:33 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:02.185 11:36:33 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:13:02.185 11:36:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:02.185 11:36:33 accel -- common/autotest_common.sh@10 -- # set +x 00:13:02.185 ************************************ 00:13:02.185 START TEST accel_decomp_full_mthread 00:13:02.185 ************************************ 00:13:02.185 11:36:33 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:02.185 11:36:33 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:02.185 [2024-06-10 11:36:34.045350] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
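accel_decomp_full_mthread, starting above, is the last of the six decompress permutations in this section; between them they sweep -o 0 (the "full" variants, where the traced size becomes the whole 111250-byte input instead of 4096 bytes), -m 0xf (multi-core) and -T 2 (multi-threaded), always against the same bib input file. A compact sketch for replaying the same sweep by hand (config handling and pass/fail checks left out):

# Sketch: replay the six decompress variants exercised in this part of the log.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
for extra in "" "-o 0" "-m 0xf" "-o 0 -m 0xf" "-T 2" "-o 0 -T 2"; do
    echo "== accel_perf -w decompress $extra =="
    "$ACCEL_PERF" -t 1 -w decompress -l "$BIB" -y $extra   # $extra left unquoted on purpose so it splits into separate flags
done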
00:13:02.185 [2024-06-10 11:36:34.045571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117321 ] 00:13:02.185 [2024-06-10 11:36:34.226944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.443 [2024-06-10 11:36:34.469431] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.701 11:36:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:05.233 00:13:05.233 real 0m2.961s 00:13:05.233 user 0m2.685s 00:13:05.233 sys 0m0.207s 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:05.233 ************************************ 00:13:05.233 11:36:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:05.233 END TEST accel_decomp_full_mthread 00:13:05.233 ************************************ 00:13:05.233 11:36:37 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:13:05.233 11:36:37 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:05.233 11:36:37 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:05.233 11:36:37 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:05.233 11:36:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:05.233 11:36:37 accel -- common/autotest_common.sh@10 -- # set +x 00:13:05.233 11:36:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:05.233 11:36:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:05.233 11:36:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:05.233 11:36:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:05.233 11:36:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:05.233 11:36:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:05.233 11:36:37 accel -- accel/accel.sh@41 -- # jq -r . 00:13:05.233 ************************************ 00:13:05.233 START TEST accel_dif_functional_tests 00:13:05.233 ************************************ 00:13:05.233 11:36:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:05.233 [2024-06-10 11:36:37.094601] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:13:05.233 [2024-06-10 11:36:37.094784] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117385 ] 00:13:05.233 [2024-06-10 11:36:37.274571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.798 [2024-06-10 11:36:37.562278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.798 [2024-06-10 11:36:37.562435] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.798 [2024-06-10 11:36:37.562622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.056 00:13:06.056 00:13:06.056 CUnit - A unit testing framework for C - Version 2.1-3 00:13:06.056 http://cunit.sourceforge.net/ 00:13:06.056 00:13:06.056 00:13:06.056 Suite: accel_dif 00:13:06.056 Test: verify: DIF generated, GUARD check ...passed 00:13:06.056 Test: verify: DIF generated, APPTAG check ...passed 00:13:06.056 Test: verify: DIF generated, REFTAG check ...passed 00:13:06.056 Test: verify: DIF not generated, GUARD check ...[2024-06-10 11:36:37.969712] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:06.056 passed 00:13:06.056 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 11:36:37.969967] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:06.056 passed 00:13:06.056 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 11:36:37.970280] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:06.056 passed 00:13:06.056 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:06.056 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:13:06.056 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-06-10 11:36:37.970835] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:06.056 passed 00:13:06.056 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:06.056 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:13:06.056 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed[2024-06-10 11:36:37.971537] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:06.056 00:13:06.056 Test: verify copy: DIF generated, GUARD check ...passed 00:13:06.056 Test: verify copy: DIF generated, APPTAG check ...passed 00:13:06.056 Test: verify copy: DIF generated, REFTAG check ...passed 00:13:06.056 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 11:36:37.972305] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:06.056 passed 00:13:06.056 Test: verify copy: DIF not generated, APPTAG check ...passed 00:13:06.056 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 11:36:37.972550] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:06.056 [2024-06-10 11:36:37.972675] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:06.056 passed 00:13:06.056 Test: generate copy: DIF generated, GUARD check ...passed 00:13:06.056 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:06.056 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:06.056 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:06.056 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:06.056 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:06.056 Test: generate copy: iovecs-len validate ...passed[2024-06-10 11:36:37.973802] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:06.056 00:13:06.056 Test: generate copy: buffer alignment validate ...passed 00:13:06.056 00:13:06.056 Run Summary: Type Total Ran Passed Failed Inactive 00:13:06.056 suites 1 1 n/a 0 0 00:13:06.056 tests 26 26 26 0 0 00:13:06.056 asserts 115 115 115 0 n/a 00:13:06.056 00:13:06.056 Elapsed time = 0.010 seconds 00:13:07.957 00:13:07.957 real 0m2.477s 00:13:07.957 user 0m4.963s 00:13:07.957 sys 0m0.255s 00:13:07.957 11:36:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:07.957 11:36:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 ************************************ 00:13:07.957 END TEST accel_dif_functional_tests 00:13:07.957 ************************************ 00:13:07.957 00:13:07.957 real 1m11.748s 00:13:07.957 user 1m19.834s 00:13:07.957 sys 0m6.166s 00:13:07.957 11:36:39 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:07.957 11:36:39 accel -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 ************************************ 00:13:07.957 END TEST accel 00:13:07.957 ************************************ 00:13:07.957 11:36:39 -- spdk/autotest.sh@188 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:07.957 11:36:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:07.957 11:36:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:07.957 11:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 ************************************ 00:13:07.957 START TEST accel_rpc 00:13:07.957 ************************************ 00:13:07.957 11:36:39 accel_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:07.957 * Looking for test 
storage... 00:13:07.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:07.957 11:36:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:07.957 11:36:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=117476 00:13:07.957 11:36:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:07.957 11:36:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 117476 00:13:07.957 11:36:39 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 117476 ']' 00:13:07.957 11:36:39 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.957 11:36:39 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:07.957 11:36:39 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.957 11:36:39 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:07.957 11:36:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.957 [2024-06-10 11:36:39.804750] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:13:07.957 [2024-06-10 11:36:39.804982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117476 ] 00:13:07.957 [2024-06-10 11:36:39.989912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.215 [2024-06-10 11:36:40.211157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.207 11:36:40 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:09.207 11:36:40 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:13:09.207 11:36:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:09.207 11:36:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:09.207 11:36:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:09.207 11:36:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:09.207 11:36:40 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:09.207 11:36:40 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:09.207 11:36:40 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:09.207 11:36:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.207 ************************************ 00:13:09.207 START TEST accel_assign_opcode 00:13:09.207 ************************************ 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:09.207 [2024-06-10 11:36:40.868416] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:09.207 [2024-06-10 11:36:40.876357] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.207 11:36:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:09.774 11:36:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.774 11:36:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:09.775 11:36:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:09.775 11:36:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:09.775 11:36:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.775 11:36:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:09.775 11:36:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.775 software 00:13:09.775 00:13:09.775 real 0m0.883s 00:13:09.775 user 0m0.053s 00:13:09.775 sys 0m0.011s 00:13:09.775 11:36:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:09.775 11:36:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:09.775 ************************************ 00:13:09.775 END TEST accel_assign_opcode 00:13:09.775 ************************************ 00:13:09.775 11:36:41 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 117476 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 117476 ']' 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 117476 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 117476 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 117476' 00:13:09.775 killing process with pid 117476 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@968 -- # kill 117476 00:13:09.775 11:36:41 accel_rpc -- common/autotest_common.sh@973 -- # wait 117476 00:13:13.055 ************************************ 00:13:13.055 END TEST accel_rpc 00:13:13.055 ************************************ 00:13:13.055 00:13:13.055 real 0m4.789s 00:13:13.055 user 0m4.828s 00:13:13.055 sys 0m0.625s 00:13:13.055 11:36:44 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:13.055 11:36:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.055 11:36:44 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:13.055 
11:36:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:13.055 11:36:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:13.055 11:36:44 -- common/autotest_common.sh@10 -- # set +x 00:13:13.055 ************************************ 00:13:13.055 START TEST app_cmdline 00:13:13.055 ************************************ 00:13:13.055 11:36:44 app_cmdline -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:13.055 * Looking for test storage... 00:13:13.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:13.055 11:36:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:13.055 11:36:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=117619 00:13:13.055 11:36:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 117619 00:13:13.055 11:36:44 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 117619 ']' 00:13:13.055 11:36:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:13.055 11:36:44 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.055 11:36:44 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:13.055 11:36:44 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.055 11:36:44 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:13.055 11:36:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:13.055 [2024-06-10 11:36:44.645659] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
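The cmdline.sh run traced here starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock. A minimal sketch of that check, assuming a built tree at /home/vagrant/spdk_repo/spdk (illustrative only, not part of the captured output):

  # start the target with a two-method RPC whitelist, as cmdline.sh does
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # whitelisted method: returns the version JSON echoed further down in this trace
  ./scripts/rpc.py spdk_get_version
  # non-whitelisted method: rejected with JSON-RPC error -32601 "Method not found"
  ./scripts/rpc.py env_dpdk_get_mem_stats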
00:13:13.055 [2024-06-10 11:36:44.647025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117619 ] 00:13:13.055 [2024-06-10 11:36:44.818615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.055 [2024-06-10 11:36:45.032191] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.990 11:36:45 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:13.990 11:36:45 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:13:13.990 11:36:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:13.990 { 00:13:13.990 "version": "SPDK v24.09-pre git sha1 d88da79a3", 00:13:13.990 "fields": { 00:13:13.990 "major": 24, 00:13:13.990 "minor": 9, 00:13:13.990 "patch": 0, 00:13:13.990 "suffix": "-pre", 00:13:13.990 "commit": "d88da79a3" 00:13:13.990 } 00:13:13.990 } 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:14.249 11:36:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:14.249 11:36:46 app_cmdline -- common/autotest_common.sh@652 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:14.507 request: 00:13:14.507 { 00:13:14.507 "method": "env_dpdk_get_mem_stats", 00:13:14.507 "req_id": 1 00:13:14.507 } 00:13:14.507 Got JSON-RPC error response 00:13:14.507 response: 00:13:14.507 { 00:13:14.507 "code": -32601, 00:13:14.507 "message": "Method not found" 00:13:14.507 } 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:14.507 11:36:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 117619 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 117619 ']' 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 117619 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 117619 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 117619' 00:13:14.507 killing process with pid 117619 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@968 -- # kill 117619 00:13:14.507 11:36:46 app_cmdline -- common/autotest_common.sh@973 -- # wait 117619 00:13:17.036 ************************************ 00:13:17.036 END TEST app_cmdline 00:13:17.036 ************************************ 00:13:17.036 00:13:17.036 real 0m4.494s 00:13:17.036 user 0m4.738s 00:13:17.036 sys 0m0.599s 00:13:17.036 11:36:48 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:17.036 11:36:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:17.036 11:36:48 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:17.036 11:36:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:17.036 11:36:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:17.036 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:17.036 ************************************ 00:13:17.036 START TEST version 00:13:17.036 ************************************ 00:13:17.036 11:36:49 version -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:17.295 * Looking for test storage... 
00:13:17.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:17.295 11:36:49 version -- app/version.sh@17 -- # get_header_version major 00:13:17.295 11:36:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:17.295 11:36:49 version -- app/version.sh@14 -- # tr -d '"' 00:13:17.295 11:36:49 version -- app/version.sh@14 -- # cut -f2 00:13:17.295 11:36:49 version -- app/version.sh@17 -- # major=24 00:13:17.295 11:36:49 version -- app/version.sh@18 -- # get_header_version minor 00:13:17.295 11:36:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:17.295 11:36:49 version -- app/version.sh@14 -- # cut -f2 00:13:17.295 11:36:49 version -- app/version.sh@14 -- # tr -d '"' 00:13:17.295 11:36:49 version -- app/version.sh@18 -- # minor=9 00:13:17.295 11:36:49 version -- app/version.sh@19 -- # get_header_version patch 00:13:17.295 11:36:49 version -- app/version.sh@14 -- # cut -f2 00:13:17.295 11:36:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:17.295 11:36:49 version -- app/version.sh@14 -- # tr -d '"' 00:13:17.295 11:36:49 version -- app/version.sh@19 -- # patch=0 00:13:17.295 11:36:49 version -- app/version.sh@20 -- # get_header_version suffix 00:13:17.295 11:36:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:17.295 11:36:49 version -- app/version.sh@14 -- # cut -f2 00:13:17.295 11:36:49 version -- app/version.sh@14 -- # tr -d '"' 00:13:17.295 11:36:49 version -- app/version.sh@20 -- # suffix=-pre 00:13:17.295 11:36:49 version -- app/version.sh@22 -- # version=24.9 00:13:17.295 11:36:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:17.295 11:36:49 version -- app/version.sh@28 -- # version=24.9rc0 00:13:17.295 11:36:49 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:17.295 11:36:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:17.296 11:36:49 version -- app/version.sh@30 -- # py_version=24.9rc0 00:13:17.296 11:36:49 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:13:17.296 ************************************ 00:13:17.296 END TEST version 00:13:17.296 ************************************ 00:13:17.296 00:13:17.296 real 0m0.154s 00:13:17.296 user 0m0.110s 00:13:17.296 sys 0m0.079s 00:13:17.296 11:36:49 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:17.296 11:36:49 version -- common/autotest_common.sh@10 -- # set +x 00:13:17.296 11:36:49 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:13:17.296 11:36:49 -- spdk/autotest.sh@193 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:17.296 11:36:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:17.296 11:36:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:17.296 11:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:17.296 ************************************ 00:13:17.296 START TEST blockdev_general 00:13:17.296 ************************************ 00:13:17.296 11:36:49 blockdev_general -- 
common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:17.296 * Looking for test storage... 00:13:17.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:17.296 11:36:49 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=117809 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 117809 00:13:17.296 11:36:49 blockdev_general -- common/autotest_common.sh@830 -- # '[' -z 117809 ']' 00:13:17.296 11:36:49 blockdev_general -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.296 11:36:49 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:13:17.296 11:36:49 blockdev_general -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:17.296 11:36:49 blockdev_general -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
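The version.sh trace above pulls SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with grep/cut/tr and then checks that the Python package reports the same string. A condensed paraphrase of those steps (illustrative only; the helper name is mine, run from the SPDK source tree):

  get_header_version() {   # mirrors the grep | cut -f2 | tr -d '"' pipeline in the trace
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 24
  minor=$(get_header_version MINOR)    # 9
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre, which the script maps to the rc0 in 24.9rc0
  PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0 in this run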
00:13:17.296 11:36:49 blockdev_general -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:17.296 11:36:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:17.554 [2024-06-10 11:36:49.419851] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:13:17.554 [2024-06-10 11:36:49.420206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117809 ] 00:13:17.555 [2024-06-10 11:36:49.606423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.120 [2024-06-10 11:36:49.870352] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.378 11:36:50 blockdev_general -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:18.378 11:36:50 blockdev_general -- common/autotest_common.sh@863 -- # return 0 00:13:18.378 11:36:50 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:18.378 11:36:50 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:13:18.378 11:36:50 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:13:18.378 11:36:50 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.378 11:36:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.314 [2024-06-10 11:36:51.094039] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:19.314 [2024-06-10 11:36:51.094136] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:19.314 00:13:19.314 [2024-06-10 11:36:51.102007] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:19.314 [2024-06-10 11:36:51.102051] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:19.314 00:13:19.314 Malloc0 00:13:19.314 Malloc1 00:13:19.314 Malloc2 00:13:19.314 Malloc3 00:13:19.314 Malloc4 00:13:19.573 Malloc5 00:13:19.573 Malloc6 00:13:19.573 Malloc7 00:13:19.573 Malloc8 00:13:19.573 Malloc9 00:13:19.573 [2024-06-10 11:36:51.531747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:19.573 [2024-06-10 11:36:51.531821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.573 [2024-06-10 11:36:51.531857] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:19.573 [2024-06-10 11:36:51.531902] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.573 [2024-06-10 11:36:51.534249] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.573 [2024-06-10 11:36:51.534293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:19.573 TestPT 00:13:19.573 11:36:51 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.573 11:36:51 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:13:19.573 5000+0 records in 00:13:19.573 5000+0 records out 00:13:19.573 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0251559 s, 407 MB/s 00:13:19.573 11:36:51 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:13:19.573 11:36:51 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.573 11:36:51 blockdev_general -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.831 AIO0 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.831 11:36:51 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.831 11:36:51 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:13:19.831 11:36:51 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.831 11:36:51 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.831 11:36:51 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.831 11:36:51 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:19.831 11:36:51 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:19.831 11:36:51 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.831 11:36:51 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.090 11:36:51 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:20.090 11:36:51 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:20.091 11:36:51 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "4be3252b-ac23-4931-b31d-c0ea1aed0569"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4be3252b-ac23-4931-b31d-c0ea1aed0569",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "c7790993-aa5d-5005-89d8-560eeac2faf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"c7790993-aa5d-5005-89d8-560eeac2faf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e8cb1eff-23e0-5445-93eb-de5b16cc91fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e8cb1eff-23e0-5445-93eb-de5b16cc91fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2fd99128-e182-5691-a811-1c04f85974bc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2fd99128-e182-5691-a811-1c04f85974bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b6cc34d9-b868-520f-b274-820901feae32"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b6cc34d9-b868-520f-b274-820901feae32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1583b64c-0803-5f38-80be-b60481d0b523"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1583b64c-0803-5f38-80be-b60481d0b523",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "074209bb-3f14-55bb-a57f-a9b6e1855e61"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "074209bb-3f14-55bb-a57f-a9b6e1855e61",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3d95a8b3-2d5c-5e51-ae5b-f43e03f197f4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d95a8b3-2d5c-5e51-ae5b-f43e03f197f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "bf04a033-a3a3-5457-a910-e8d05ec2e923"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bf04a033-a3a3-5457-a910-e8d05ec2e923",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "48d2c5ac-b5dc-5bea-ad87-cf3cf284960c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48d2c5ac-b5dc-5bea-ad87-cf3cf284960c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "679aeee3-6a48-589e-880d-106ea8c4efdc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "679aeee3-6a48-589e-880d-106ea8c4efdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4211d1e0-54d3-54ec-8b09-d46b133730f3"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4211d1e0-54d3-54ec-8b09-d46b133730f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6c8bd1f0-9099-432f-9a98-e90a992efd8d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6c8bd1f0-9099-432f-9a98-e90a992efd8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6c8bd1f0-9099-432f-9a98-e90a992efd8d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b7a9d2e4-655d-4447-b84c-f7148f0eef8a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "fc180a42-af6e-4fa9-93f3-a03a8186b8f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4d9ea621-3d1f-49c3-838c-f7a5db74bc35"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4d9ea621-3d1f-49c3-838c-f7a5db74bc35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4d9ea621-3d1f-49c3-838c-f7a5db74bc35",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b216b8e2-9495-4bb2-bb89-36725ad678b4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "db70d9af-4cdd-468b-97b6-68a155c4a2b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "74d95990-64fa-4de1-9857-c25e9f87d36a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "74d95990-64fa-4de1-9857-c25e9f87d36a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "74d95990-64fa-4de1-9857-c25e9f87d36a",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "998fc740-bc4b-4ce7-b87f-ee4aba87af06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "87425162-9ba5-4ee8-b2fc-8d187996d423",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "73ed1f1b-4523-4cd2-acf1-6e41c74cdc5a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "73ed1f1b-4523-4cd2-acf1-6e41c74cdc5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:20.091 11:36:51 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:20.091 11:36:51 blockdev_general -- 
bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:13:20.091 11:36:51 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:20.091 11:36:51 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 117809 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@949 -- # '[' -z 117809 ']' 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@953 -- # kill -0 117809 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@954 -- # uname 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 117809 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:20.091 killing process with pid 117809 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@967 -- # echo 'killing process with pid 117809' 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@968 -- # kill 117809 00:13:20.091 11:36:51 blockdev_general -- common/autotest_common.sh@973 -- # wait 117809 00:13:24.310 11:36:55 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:24.310 11:36:55 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:24.310 11:36:55 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:13:24.310 11:36:55 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:24.310 11:36:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:24.310 ************************************ 00:13:24.310 START TEST bdev_hello_world 00:13:24.310 ************************************ 00:13:24.310 11:36:55 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:24.310 [2024-06-10 11:36:55.649063] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
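Alongside the Malloc and raid bdevs listed above, the setup created an AIO bdev backed by a 10 MB file written with dd and registered over RPC with a 2048-byte block size. A sketch of just that part, assuming a running target on the default socket (illustrative, not from the captured trace):

  # create the 10 MB backing file (5000 blocks of 2048 bytes) and register it as AIO0
  dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
  ./scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
  # the bdev_get_bdevs dump above is what blockdev.sh parses to build its bdev list
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'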
00:13:24.310 [2024-06-10 11:36:55.649273] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117913 ] 00:13:24.310 [2024-06-10 11:36:55.831563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.310 [2024-06-10 11:36:56.073215] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.568 [2024-06-10 11:36:56.562413] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:24.568 [2024-06-10 11:36:56.562499] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:24.568 [2024-06-10 11:36:56.570354] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:24.569 [2024-06-10 11:36:56.570403] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:24.569 [2024-06-10 11:36:56.578384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:24.569 [2024-06-10 11:36:56.578455] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:24.569 [2024-06-10 11:36:56.578488] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:24.869 [2024-06-10 11:36:56.815984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:24.869 [2024-06-10 11:36:56.816122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.869 [2024-06-10 11:36:56.816167] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:24.869 [2024-06-10 11:36:56.816213] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.869 [2024-06-10 11:36:56.818794] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.869 [2024-06-10 11:36:56.818854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:25.436 [2024-06-10 11:36:57.191608] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:25.436 [2024-06-10 11:36:57.191679] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:13:25.436 [2024-06-10 11:36:57.191745] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:25.436 [2024-06-10 11:36:57.191811] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:25.436 [2024-06-10 11:36:57.191908] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:25.436 [2024-06-10 11:36:57.191935] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:25.436 [2024-06-10 11:36:57.191985] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
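The bdev_hello_world test above simply runs the packaged example against Malloc0: it opens the bdev named by -b from the supplied JSON config, writes "Hello World!" and reads it back. The equivalent manual invocation (paths as used in this run; -b may name any read/write-capable bdev from the config):

  ./build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0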
00:13:25.436 00:13:25.436 [2024-06-10 11:36:57.192022] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:27.969 00:13:27.969 real 0m4.301s 00:13:27.969 user 0m3.802s 00:13:27.969 sys 0m0.344s 00:13:27.969 11:36:59 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:27.969 11:36:59 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:27.969 ************************************ 00:13:27.969 END TEST bdev_hello_world 00:13:27.969 ************************************ 00:13:27.969 11:36:59 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:27.969 11:36:59 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:27.969 11:36:59 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:27.969 11:36:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:27.969 ************************************ 00:13:27.969 START TEST bdev_bounds 00:13:27.969 ************************************ 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=117987 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:27.969 Process bdevio pid: 117987 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 117987' 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 117987 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 117987 ']' 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:27.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:27.969 11:36:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:27.969 [2024-06-10 11:37:00.014180] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
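The bdev_bounds step that starts here drives the CUnit-based bdevio app over every bdev defined in bdev.json: bdevio is launched with -w so it waits for an RPC trigger, runs on three cores (-c 0x7 in the EAL parameters that follow), and the 16 per-bdev suites are then started over the RPC socket with tests.py perform_tests. A sketch of the equivalent manual invocation, assembled from the two commands echoed in this log:

    # Start bdevio and let it wait for the RPC trigger (paths as logged):
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # Once it is listening on /var/tmp/spdk.sock, run all suites against the exposed bdevs:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests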
00:13:27.969 [2024-06-10 11:37:00.014431] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117987 ] 00:13:28.227 [2024-06-10 11:37:00.217143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.484 [2024-06-10 11:37:00.486693] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.484 [2024-06-10 11:37:00.486816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.484 [2024-06-10 11:37:00.486816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.049 [2024-06-10 11:37:00.938585] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:29.049 [2024-06-10 11:37:00.938702] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:29.049 [2024-06-10 11:37:00.946592] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:29.049 [2024-06-10 11:37:00.946668] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:29.049 [2024-06-10 11:37:00.954625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:29.049 [2024-06-10 11:37:00.954743] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:29.049 [2024-06-10 11:37:00.954766] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:29.306 [2024-06-10 11:37:01.172219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:29.306 [2024-06-10 11:37:01.172312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.306 [2024-06-10 11:37:01.172358] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:29.306 [2024-06-10 11:37:01.172383] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.306 [2024-06-10 11:37:01.175265] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.306 [2024-06-10 11:37:01.175352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:29.564 11:37:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:29.564 11:37:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:13:29.564 11:37:01 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:29.822 I/O targets: 00:13:29.822 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:13:29.822 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:13:29.822 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:13:29.822 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:13:29.822 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:13:29.822 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:13:29.822 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:13:29.822 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:13:29.822 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:13:29.822 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:13:29.822 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:13:29.822 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:13:29.822 raid0: 131072 blocks of 512 bytes (64 MiB) 00:13:29.822 concat0: 131072 blocks of 512 bytes (64 MiB) 00:13:29.822 raid1: 65536 
blocks of 512 bytes (32 MiB) 00:13:29.822 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:13:29.822 00:13:29.822 00:13:29.822 CUnit - A unit testing framework for C - Version 2.1-3 00:13:29.822 http://cunit.sourceforge.net/ 00:13:29.822 00:13:29.822 00:13:29.822 Suite: bdevio tests on: AIO0 00:13:29.822 Test: blockdev write read block ...passed 00:13:29.822 Test: blockdev write zeroes read block ...passed 00:13:29.822 Test: blockdev write zeroes read no split ...passed 00:13:29.822 Test: blockdev write zeroes read split ...passed 00:13:29.822 Test: blockdev write zeroes read split partial ...passed 00:13:29.822 Test: blockdev reset ...passed 00:13:29.822 Test: blockdev write read 8 blocks ...passed 00:13:29.822 Test: blockdev write read size > 128k ...passed 00:13:29.822 Test: blockdev write read invalid size ...passed 00:13:29.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:29.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:29.822 Test: blockdev write read max offset ...passed 00:13:29.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:29.822 Test: blockdev writev readv 8 blocks ...passed 00:13:29.822 Test: blockdev writev readv 30 x 1block ...passed 00:13:29.822 Test: blockdev writev readv block ...passed 00:13:29.822 Test: blockdev writev readv size > 128k ...passed 00:13:29.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:29.822 Test: blockdev comparev and writev ...passed 00:13:29.822 Test: blockdev nvme passthru rw ...passed 00:13:29.822 Test: blockdev nvme passthru vendor specific ...passed 00:13:29.822 Test: blockdev nvme admin passthru ...passed 00:13:29.822 Test: blockdev copy ...passed 00:13:29.822 Suite: bdevio tests on: raid1 00:13:29.822 Test: blockdev write read block ...passed 00:13:29.822 Test: blockdev write zeroes read block ...passed 00:13:29.822 Test: blockdev write zeroes read no split ...passed 00:13:29.822 Test: blockdev write zeroes read split ...passed 00:13:30.080 Test: blockdev write zeroes read split partial ...passed 00:13:30.080 Test: blockdev reset ...passed 00:13:30.080 Test: blockdev write read 8 blocks ...passed 00:13:30.080 Test: blockdev write read size > 128k ...passed 00:13:30.080 Test: blockdev write read invalid size ...passed 00:13:30.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.080 Test: blockdev write read max offset ...passed 00:13:30.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.080 Test: blockdev writev readv 8 blocks ...passed 00:13:30.080 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.080 Test: blockdev writev readv block ...passed 00:13:30.080 Test: blockdev writev readv size > 128k ...passed 00:13:30.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.080 Test: blockdev comparev and writev ...passed 00:13:30.080 Test: blockdev nvme passthru rw ...passed 00:13:30.081 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.081 Test: blockdev nvme admin passthru ...passed 00:13:30.081 Test: blockdev copy ...passed 00:13:30.081 Suite: bdevio tests on: concat0 00:13:30.081 Test: blockdev write read block ...passed 00:13:30.081 Test: blockdev write zeroes read block ...passed 00:13:30.081 Test: blockdev write zeroes read no split ...passed 00:13:30.081 Test: blockdev write zeroes read split ...passed 00:13:30.081 Test: 
blockdev write zeroes read split partial ...passed 00:13:30.081 Test: blockdev reset ...passed 00:13:30.081 Test: blockdev write read 8 blocks ...passed 00:13:30.081 Test: blockdev write read size > 128k ...passed 00:13:30.081 Test: blockdev write read invalid size ...passed 00:13:30.081 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.081 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.081 Test: blockdev write read max offset ...passed 00:13:30.081 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.081 Test: blockdev writev readv 8 blocks ...passed 00:13:30.081 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.081 Test: blockdev writev readv block ...passed 00:13:30.081 Test: blockdev writev readv size > 128k ...passed 00:13:30.081 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.081 Test: blockdev comparev and writev ...passed 00:13:30.081 Test: blockdev nvme passthru rw ...passed 00:13:30.081 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.081 Test: blockdev nvme admin passthru ...passed 00:13:30.081 Test: blockdev copy ...passed 00:13:30.081 Suite: bdevio tests on: raid0 00:13:30.081 Test: blockdev write read block ...passed 00:13:30.081 Test: blockdev write zeroes read block ...passed 00:13:30.081 Test: blockdev write zeroes read no split ...passed 00:13:30.081 Test: blockdev write zeroes read split ...passed 00:13:30.081 Test: blockdev write zeroes read split partial ...passed 00:13:30.081 Test: blockdev reset ...passed 00:13:30.081 Test: blockdev write read 8 blocks ...passed 00:13:30.081 Test: blockdev write read size > 128k ...passed 00:13:30.081 Test: blockdev write read invalid size ...passed 00:13:30.081 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.081 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.081 Test: blockdev write read max offset ...passed 00:13:30.081 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.081 Test: blockdev writev readv 8 blocks ...passed 00:13:30.081 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.081 Test: blockdev writev readv block ...passed 00:13:30.081 Test: blockdev writev readv size > 128k ...passed 00:13:30.081 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.081 Test: blockdev comparev and writev ...passed 00:13:30.081 Test: blockdev nvme passthru rw ...passed 00:13:30.081 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.081 Test: blockdev nvme admin passthru ...passed 00:13:30.081 Test: blockdev copy ...passed 00:13:30.081 Suite: bdevio tests on: TestPT 00:13:30.081 Test: blockdev write read block ...passed 00:13:30.081 Test: blockdev write zeroes read block ...passed 00:13:30.081 Test: blockdev write zeroes read no split ...passed 00:13:30.339 Test: blockdev write zeroes read split ...passed 00:13:30.339 Test: blockdev write zeroes read split partial ...passed 00:13:30.339 Test: blockdev reset ...passed 00:13:30.339 Test: blockdev write read 8 blocks ...passed 00:13:30.339 Test: blockdev write read size > 128k ...passed 00:13:30.339 Test: blockdev write read invalid size ...passed 00:13:30.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.339 Test: blockdev write read max offset ...passed 00:13:30.339 Test: blockdev write read 2 blocks on 
overlapped address offset ...passed 00:13:30.339 Test: blockdev writev readv 8 blocks ...passed 00:13:30.339 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.339 Test: blockdev writev readv block ...passed 00:13:30.339 Test: blockdev writev readv size > 128k ...passed 00:13:30.339 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.339 Test: blockdev comparev and writev ...passed 00:13:30.339 Test: blockdev nvme passthru rw ...passed 00:13:30.339 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.339 Test: blockdev nvme admin passthru ...passed 00:13:30.339 Test: blockdev copy ...passed 00:13:30.339 Suite: bdevio tests on: Malloc2p7 00:13:30.339 Test: blockdev write read block ...passed 00:13:30.339 Test: blockdev write zeroes read block ...passed 00:13:30.339 Test: blockdev write zeroes read no split ...passed 00:13:30.339 Test: blockdev write zeroes read split ...passed 00:13:30.339 Test: blockdev write zeroes read split partial ...passed 00:13:30.339 Test: blockdev reset ...passed 00:13:30.339 Test: blockdev write read 8 blocks ...passed 00:13:30.339 Test: blockdev write read size > 128k ...passed 00:13:30.339 Test: blockdev write read invalid size ...passed 00:13:30.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.339 Test: blockdev write read max offset ...passed 00:13:30.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.339 Test: blockdev writev readv 8 blocks ...passed 00:13:30.339 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.339 Test: blockdev writev readv block ...passed 00:13:30.339 Test: blockdev writev readv size > 128k ...passed 00:13:30.339 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.339 Test: blockdev comparev and writev ...passed 00:13:30.339 Test: blockdev nvme passthru rw ...passed 00:13:30.339 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.339 Test: blockdev nvme admin passthru ...passed 00:13:30.339 Test: blockdev copy ...passed 00:13:30.339 Suite: bdevio tests on: Malloc2p6 00:13:30.339 Test: blockdev write read block ...passed 00:13:30.339 Test: blockdev write zeroes read block ...passed 00:13:30.339 Test: blockdev write zeroes read no split ...passed 00:13:30.339 Test: blockdev write zeroes read split ...passed 00:13:30.597 Test: blockdev write zeroes read split partial ...passed 00:13:30.597 Test: blockdev reset ...passed 00:13:30.597 Test: blockdev write read 8 blocks ...passed 00:13:30.597 Test: blockdev write read size > 128k ...passed 00:13:30.597 Test: blockdev write read invalid size ...passed 00:13:30.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.597 Test: blockdev write read max offset ...passed 00:13:30.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.597 Test: blockdev writev readv 8 blocks ...passed 00:13:30.597 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.597 Test: blockdev writev readv block ...passed 00:13:30.597 Test: blockdev writev readv size > 128k ...passed 00:13:30.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.597 Test: blockdev comparev and writev ...passed 00:13:30.597 Test: blockdev nvme passthru rw ...passed 00:13:30.597 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.597 
Test: blockdev nvme admin passthru ...passed 00:13:30.597 Test: blockdev copy ...passed 00:13:30.597 Suite: bdevio tests on: Malloc2p5 00:13:30.597 Test: blockdev write read block ...passed 00:13:30.597 Test: blockdev write zeroes read block ...passed 00:13:30.597 Test: blockdev write zeroes read no split ...passed 00:13:30.597 Test: blockdev write zeroes read split ...passed 00:13:30.597 Test: blockdev write zeroes read split partial ...passed 00:13:30.597 Test: blockdev reset ...passed 00:13:30.597 Test: blockdev write read 8 blocks ...passed 00:13:30.597 Test: blockdev write read size > 128k ...passed 00:13:30.597 Test: blockdev write read invalid size ...passed 00:13:30.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.597 Test: blockdev write read max offset ...passed 00:13:30.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.597 Test: blockdev writev readv 8 blocks ...passed 00:13:30.597 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.597 Test: blockdev writev readv block ...passed 00:13:30.597 Test: blockdev writev readv size > 128k ...passed 00:13:30.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.597 Test: blockdev comparev and writev ...passed 00:13:30.597 Test: blockdev nvme passthru rw ...passed 00:13:30.597 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.597 Test: blockdev nvme admin passthru ...passed 00:13:30.597 Test: blockdev copy ...passed 00:13:30.597 Suite: bdevio tests on: Malloc2p4 00:13:30.597 Test: blockdev write read block ...passed 00:13:30.597 Test: blockdev write zeroes read block ...passed 00:13:30.597 Test: blockdev write zeroes read no split ...passed 00:13:30.597 Test: blockdev write zeroes read split ...passed 00:13:30.597 Test: blockdev write zeroes read split partial ...passed 00:13:30.597 Test: blockdev reset ...passed 00:13:30.597 Test: blockdev write read 8 blocks ...passed 00:13:30.597 Test: blockdev write read size > 128k ...passed 00:13:30.597 Test: blockdev write read invalid size ...passed 00:13:30.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.597 Test: blockdev write read max offset ...passed 00:13:30.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.597 Test: blockdev writev readv 8 blocks ...passed 00:13:30.597 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.597 Test: blockdev writev readv block ...passed 00:13:30.597 Test: blockdev writev readv size > 128k ...passed 00:13:30.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.597 Test: blockdev comparev and writev ...passed 00:13:30.597 Test: blockdev nvme passthru rw ...passed 00:13:30.598 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.598 Test: blockdev nvme admin passthru ...passed 00:13:30.598 Test: blockdev copy ...passed 00:13:30.598 Suite: bdevio tests on: Malloc2p3 00:13:30.598 Test: blockdev write read block ...passed 00:13:30.598 Test: blockdev write zeroes read block ...passed 00:13:30.598 Test: blockdev write zeroes read no split ...passed 00:13:30.598 Test: blockdev write zeroes read split ...passed 00:13:30.855 Test: blockdev write zeroes read split partial ...passed 00:13:30.855 Test: blockdev reset ...passed 00:13:30.855 Test: blockdev write read 8 blocks ...passed 
00:13:30.855 Test: blockdev write read size > 128k ...passed 00:13:30.855 Test: blockdev write read invalid size ...passed 00:13:30.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.855 Test: blockdev write read max offset ...passed 00:13:30.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.855 Test: blockdev writev readv 8 blocks ...passed 00:13:30.855 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.855 Test: blockdev writev readv block ...passed 00:13:30.855 Test: blockdev writev readv size > 128k ...passed 00:13:30.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.855 Test: blockdev comparev and writev ...passed 00:13:30.855 Test: blockdev nvme passthru rw ...passed 00:13:30.855 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.855 Test: blockdev nvme admin passthru ...passed 00:13:30.855 Test: blockdev copy ...passed 00:13:30.855 Suite: bdevio tests on: Malloc2p2 00:13:30.855 Test: blockdev write read block ...passed 00:13:30.855 Test: blockdev write zeroes read block ...passed 00:13:30.855 Test: blockdev write zeroes read no split ...passed 00:13:30.855 Test: blockdev write zeroes read split ...passed 00:13:30.855 Test: blockdev write zeroes read split partial ...passed 00:13:30.855 Test: blockdev reset ...passed 00:13:30.855 Test: blockdev write read 8 blocks ...passed 00:13:30.855 Test: blockdev write read size > 128k ...passed 00:13:30.855 Test: blockdev write read invalid size ...passed 00:13:30.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.855 Test: blockdev write read max offset ...passed 00:13:30.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.855 Test: blockdev writev readv 8 blocks ...passed 00:13:30.855 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.855 Test: blockdev writev readv block ...passed 00:13:30.855 Test: blockdev writev readv size > 128k ...passed 00:13:30.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.855 Test: blockdev comparev and writev ...passed 00:13:30.855 Test: blockdev nvme passthru rw ...passed 00:13:30.855 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.855 Test: blockdev nvme admin passthru ...passed 00:13:30.855 Test: blockdev copy ...passed 00:13:30.855 Suite: bdevio tests on: Malloc2p1 00:13:30.855 Test: blockdev write read block ...passed 00:13:30.855 Test: blockdev write zeroes read block ...passed 00:13:30.856 Test: blockdev write zeroes read no split ...passed 00:13:30.856 Test: blockdev write zeroes read split ...passed 00:13:30.856 Test: blockdev write zeroes read split partial ...passed 00:13:30.856 Test: blockdev reset ...passed 00:13:30.856 Test: blockdev write read 8 blocks ...passed 00:13:30.856 Test: blockdev write read size > 128k ...passed 00:13:30.856 Test: blockdev write read invalid size ...passed 00:13:30.856 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.856 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.856 Test: blockdev write read max offset ...passed 00:13:30.856 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.856 Test: blockdev writev readv 8 blocks ...passed 00:13:30.856 Test: blockdev writev readv 30 x 
1block ...passed 00:13:30.856 Test: blockdev writev readv block ...passed 00:13:30.856 Test: blockdev writev readv size > 128k ...passed 00:13:30.856 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.856 Test: blockdev comparev and writev ...passed 00:13:30.856 Test: blockdev nvme passthru rw ...passed 00:13:30.856 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.856 Test: blockdev nvme admin passthru ...passed 00:13:30.856 Test: blockdev copy ...passed 00:13:30.856 Suite: bdevio tests on: Malloc2p0 00:13:30.856 Test: blockdev write read block ...passed 00:13:30.856 Test: blockdev write zeroes read block ...passed 00:13:30.856 Test: blockdev write zeroes read no split ...passed 00:13:31.113 Test: blockdev write zeroes read split ...passed 00:13:31.113 Test: blockdev write zeroes read split partial ...passed 00:13:31.113 Test: blockdev reset ...passed 00:13:31.113 Test: blockdev write read 8 blocks ...passed 00:13:31.113 Test: blockdev write read size > 128k ...passed 00:13:31.113 Test: blockdev write read invalid size ...passed 00:13:31.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.113 Test: blockdev write read max offset ...passed 00:13:31.113 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.113 Test: blockdev writev readv 8 blocks ...passed 00:13:31.113 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.113 Test: blockdev writev readv block ...passed 00:13:31.113 Test: blockdev writev readv size > 128k ...passed 00:13:31.113 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.113 Test: blockdev comparev and writev ...passed 00:13:31.113 Test: blockdev nvme passthru rw ...passed 00:13:31.113 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.113 Test: blockdev nvme admin passthru ...passed 00:13:31.113 Test: blockdev copy ...passed 00:13:31.113 Suite: bdevio tests on: Malloc1p1 00:13:31.113 Test: blockdev write read block ...passed 00:13:31.113 Test: blockdev write zeroes read block ...passed 00:13:31.113 Test: blockdev write zeroes read no split ...passed 00:13:31.113 Test: blockdev write zeroes read split ...passed 00:13:31.113 Test: blockdev write zeroes read split partial ...passed 00:13:31.113 Test: blockdev reset ...passed 00:13:31.113 Test: blockdev write read 8 blocks ...passed 00:13:31.113 Test: blockdev write read size > 128k ...passed 00:13:31.113 Test: blockdev write read invalid size ...passed 00:13:31.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.113 Test: blockdev write read max offset ...passed 00:13:31.113 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.113 Test: blockdev writev readv 8 blocks ...passed 00:13:31.113 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.113 Test: blockdev writev readv block ...passed 00:13:31.114 Test: blockdev writev readv size > 128k ...passed 00:13:31.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.114 Test: blockdev comparev and writev ...passed 00:13:31.114 Test: blockdev nvme passthru rw ...passed 00:13:31.114 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.114 Test: blockdev nvme admin passthru ...passed 00:13:31.114 Test: blockdev copy ...passed 00:13:31.114 Suite: bdevio tests on: Malloc1p0 
00:13:31.114 Test: blockdev write read block ...passed 00:13:31.114 Test: blockdev write zeroes read block ...passed 00:13:31.114 Test: blockdev write zeroes read no split ...passed 00:13:31.114 Test: blockdev write zeroes read split ...passed 00:13:31.114 Test: blockdev write zeroes read split partial ...passed 00:13:31.114 Test: blockdev reset ...passed 00:13:31.114 Test: blockdev write read 8 blocks ...passed 00:13:31.114 Test: blockdev write read size > 128k ...passed 00:13:31.114 Test: blockdev write read invalid size ...passed 00:13:31.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.114 Test: blockdev write read max offset ...passed 00:13:31.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.114 Test: blockdev writev readv 8 blocks ...passed 00:13:31.114 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.114 Test: blockdev writev readv block ...passed 00:13:31.114 Test: blockdev writev readv size > 128k ...passed 00:13:31.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.114 Test: blockdev comparev and writev ...passed 00:13:31.114 Test: blockdev nvme passthru rw ...passed 00:13:31.114 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.114 Test: blockdev nvme admin passthru ...passed 00:13:31.114 Test: blockdev copy ...passed 00:13:31.114 Suite: bdevio tests on: Malloc0 00:13:31.114 Test: blockdev write read block ...passed 00:13:31.114 Test: blockdev write zeroes read block ...passed 00:13:31.114 Test: blockdev write zeroes read no split ...passed 00:13:31.371 Test: blockdev write zeroes read split ...passed 00:13:31.371 Test: blockdev write zeroes read split partial ...passed 00:13:31.371 Test: blockdev reset ...passed 00:13:31.371 Test: blockdev write read 8 blocks ...passed 00:13:31.371 Test: blockdev write read size > 128k ...passed 00:13:31.371 Test: blockdev write read invalid size ...passed 00:13:31.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.372 Test: blockdev write read max offset ...passed 00:13:31.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.372 Test: blockdev writev readv 8 blocks ...passed 00:13:31.372 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.372 Test: blockdev writev readv block ...passed 00:13:31.372 Test: blockdev writev readv size > 128k ...passed 00:13:31.372 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.372 Test: blockdev comparev and writev ...passed 00:13:31.372 Test: blockdev nvme passthru rw ...passed 00:13:31.372 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.372 Test: blockdev nvme admin passthru ...passed 00:13:31.372 Test: blockdev copy ...passed 00:13:31.372 00:13:31.372 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.372 suites 16 16 n/a 0 0 00:13:31.372 tests 368 368 368 0 0 00:13:31.372 asserts 2224 2224 2224 0 n/a 00:13:31.372 00:13:31.372 Elapsed time = 4.477 seconds 00:13:31.372 0 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 117987 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 117987 ']' 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 117987 00:13:31.372 11:37:03 
blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 117987 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 117987' 00:13:31.372 killing process with pid 117987 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # kill 117987 00:13:31.372 11:37:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@973 -- # wait 117987 00:13:33.902 11:37:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:33.902 00:13:33.902 real 0m5.613s 00:13:33.902 user 0m14.393s 00:13:33.902 sys 0m0.630s 00:13:33.902 11:37:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:33.902 ************************************ 00:13:33.902 END TEST bdev_bounds 00:13:33.902 11:37:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:33.902 ************************************ 00:13:33.902 11:37:05 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:33.902 11:37:05 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:33.902 11:37:05 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:33.902 11:37:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:33.902 ************************************ 00:13:33.902 START TEST bdev_nbd 00:13:33.902 ************************************ 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=118095 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 118095 /var/tmp/spdk-nbd.sock 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 118095 ']' 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:33.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:33.902 11:37:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:33.902 [2024-06-10 11:37:05.702218] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:13:33.902 [2024-06-10 11:37:05.702439] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.902 [2024-06-10 11:37:05.884921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.160 [2024-06-10 11:37:06.143883] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.725 [2024-06-10 11:37:06.544524] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:34.725 [2024-06-10 11:37:06.544603] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:34.725 [2024-06-10 11:37:06.552485] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:34.725 [2024-06-10 11:37:06.552562] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:34.725 [2024-06-10 11:37:06.560511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:34.725 [2024-06-10 11:37:06.560581] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:34.725 [2024-06-10 11:37:06.560611] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:34.725 [2024-06-10 11:37:06.764117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:34.725 [2024-06-10 11:37:06.764202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.725 [2024-06-10 11:37:06.764250] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:34.725 [2024-06-10 11:37:06.764277] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.725 [2024-06-10 11:37:06.766599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.725 [2024-06-10 11:37:06.766650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:35.290 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:35.290 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:13:35.290 11:37:07 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:35.290 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.290 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:35.290 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:35.290 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:35.290 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.291 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 
'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:35.291 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:35.291 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:35.291 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:35.291 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:35.291 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:35.291 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.549 1+0 records in 00:13:35.549 1+0 records out 00:13:35.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410134 s, 10.0 MB/s 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:35.549 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:13:35.808 11:37:07 
blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.808 1+0 records in 00:13:35.808 1+0 records out 00:13:35.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364237 s, 11.2 MB/s 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:35.808 11:37:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.067 1+0 records in 00:13:36.067 1+0 records out 00:13:36.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484978 s, 8.4 MB/s 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # size=4096 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:36.067 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.325 1+0 records in 00:13:36.325 1+0 records out 00:13:36.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397211 s, 10.3 MB/s 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:36.325 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd4 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@868 -- # local i 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd4 /proc/partitions 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.582 1+0 records in 00:13:36.582 1+0 records out 00:13:36.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505851 s, 8.1 MB/s 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:36.582 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:13:36.839 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:36.839 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:36.839 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:36.839 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd5 00:13:36.839 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:36.839 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:36.839 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:36.839 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd5 /proc/partitions 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.840 1+0 records in 00:13:36.840 1+0 records out 00:13:36.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502707 s, 8.1 MB/s 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 
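Each nbd_start_disk call in this section is followed by the same verification pattern from nbd_common.sh and autotest_common.sh, repeated once per bdev: waitfornbd polls /proc/partitions until the kernel exposes the new /dev/nbdX node, one 4 KiB direct-I/O block is read through the device into a scratch file, and the scratch file's size is checked to be non-zero before it is removed. Condensed from the xtrace above (shown for nbd0; the later devices differ only in the device name):

    grep -q -w nbd0 /proc/partitions      # waitfornbd: retry until the device shows up
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct      # one direct-I/O block read through the nbd device
    stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # must be non-zero (4096 here)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest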
00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:36.840 11:37:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd6 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd6 /proc/partitions 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.112 1+0 records in 00:13:37.112 1+0 records out 00:13:37.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638554 s, 6.4 MB/s 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:37.112 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:13:37.371 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:13:37.371 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd7 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:37.630 11:37:09 
blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd7 /proc/partitions 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.630 1+0 records in 00:13:37.630 1+0 records out 00:13:37.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510197 s, 8.0 MB/s 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:37.630 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd8 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd8 /proc/partitions 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.888 1+0 records in 00:13:37.888 1+0 records out 00:13:37.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000926736 s, 4.4 MB/s 00:13:37.888 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.889 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:37.889 11:37:09 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.889 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:37.889 11:37:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:37.889 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:37.889 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:37.889 11:37:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd9 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd9 /proc/partitions 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.146 1+0 records in 00:13:38.146 1+0 records out 00:13:38.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688881 s, 5.9 MB/s 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:38.146 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 /proc/partitions 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.404 1+0 records in 00:13:38.404 1+0 records out 00:13:38.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734068 s, 5.6 MB/s 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:38.404 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:38.662 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.663 1+0 records in 00:13:38.663 1+0 records out 00:13:38.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104099 s, 3.9 MB/s 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:38.663 11:37:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd12 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd12 /proc/partitions 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.229 1+0 records in 00:13:39.229 1+0 records out 00:13:39.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847125 s, 4.8 MB/s 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:39.229 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd13 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:39.488 11:37:11 
blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd13 /proc/partitions 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.488 1+0 records in 00:13:39.488 1+0 records out 00:13:39.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00368449 s, 1.1 MB/s 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:39.488 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd14 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd14 /proc/partitions 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.747 1+0 records in 00:13:39.747 1+0 records out 00:13:39.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904455 s, 4.5 MB/s 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.747 11:37:11 
blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:39.747 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd15 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd15 /proc/partitions 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:40.006 11:37:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.006 1+0 records in 00:13:40.006 1+0 records out 00:13:40.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122731 s, 3.3 MB/s 00:13:40.006 11:37:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.006 11:37:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:40.006 11:37:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.006 11:37:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:40.006 11:37:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:40.006 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:40.006 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:40.006 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:40.266 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:40.266 { 00:13:40.266 "nbd_device": "/dev/nbd0", 00:13:40.266 "bdev_name": "Malloc0" 00:13:40.266 }, 00:13:40.266 { 00:13:40.266 "nbd_device": "/dev/nbd1", 00:13:40.266 "bdev_name": "Malloc1p0" 00:13:40.266 }, 00:13:40.266 { 00:13:40.266 "nbd_device": "/dev/nbd2", 00:13:40.266 "bdev_name": "Malloc1p1" 00:13:40.266 }, 00:13:40.266 { 00:13:40.266 "nbd_device": "/dev/nbd3", 00:13:40.266 "bdev_name": "Malloc2p0" 00:13:40.266 }, 00:13:40.266 { 00:13:40.266 "nbd_device": "/dev/nbd4", 00:13:40.267 "bdev_name": "Malloc2p1" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": 
"/dev/nbd5", 00:13:40.267 "bdev_name": "Malloc2p2" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd6", 00:13:40.267 "bdev_name": "Malloc2p3" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd7", 00:13:40.267 "bdev_name": "Malloc2p4" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd8", 00:13:40.267 "bdev_name": "Malloc2p5" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd9", 00:13:40.267 "bdev_name": "Malloc2p6" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd10", 00:13:40.267 "bdev_name": "Malloc2p7" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd11", 00:13:40.267 "bdev_name": "TestPT" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd12", 00:13:40.267 "bdev_name": "raid0" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd13", 00:13:40.267 "bdev_name": "concat0" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd14", 00:13:40.267 "bdev_name": "raid1" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd15", 00:13:40.267 "bdev_name": "AIO0" 00:13:40.267 } 00:13:40.267 ]' 00:13:40.267 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:40.267 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd0", 00:13:40.267 "bdev_name": "Malloc0" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd1", 00:13:40.267 "bdev_name": "Malloc1p0" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd2", 00:13:40.267 "bdev_name": "Malloc1p1" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd3", 00:13:40.267 "bdev_name": "Malloc2p0" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd4", 00:13:40.267 "bdev_name": "Malloc2p1" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd5", 00:13:40.267 "bdev_name": "Malloc2p2" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd6", 00:13:40.267 "bdev_name": "Malloc2p3" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd7", 00:13:40.267 "bdev_name": "Malloc2p4" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd8", 00:13:40.267 "bdev_name": "Malloc2p5" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd9", 00:13:40.267 "bdev_name": "Malloc2p6" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd10", 00:13:40.267 "bdev_name": "Malloc2p7" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd11", 00:13:40.267 "bdev_name": "TestPT" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd12", 00:13:40.267 "bdev_name": "raid0" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd13", 00:13:40.267 "bdev_name": "concat0" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd14", 00:13:40.267 "bdev_name": "raid1" 00:13:40.267 }, 00:13:40.267 { 00:13:40.267 "nbd_device": "/dev/nbd15", 00:13:40.267 "bdev_name": "AIO0" 00:13:40.267 } 00:13:40.267 ]' 00:13:40.267 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:40.267 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:13:40.267 
11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:40.267 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:13:40.267 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.268 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:40.268 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.268 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.526 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.094 11:37:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.351 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.609 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.867 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:42.125 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:42.125 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:42.125 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:42.125 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.125 11:37:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.125 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:42.125 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.125 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.125 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.125 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.384 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.643 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.901 11:37:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.158 11:37:15 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.158 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:43.417 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:43.417 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:43.417 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:43.417 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.417 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.417 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:43.417 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.417 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.418 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.418 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.679 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.937 11:37:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.195 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.760 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.018 11:37:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:45.276 
11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:45.276 11:37:17 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:45.276 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:45.277 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:45.535 /dev/nbd0 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.535 1+0 records in 00:13:45.535 1+0 records out 00:13:45.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652605 s, 6.3 MB/s 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:45.535 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:13:45.793 /dev/nbd1 00:13:46.051 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:46.051 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:46.051 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:13:46.051 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:46.052 
11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.052 1+0 records in 00:13:46.052 1+0 records out 00:13:46.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266252 s, 15.4 MB/s 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:46.052 11:37:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:13:46.310 /dev/nbd10 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 /proc/partitions 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.310 1+0 records in 00:13:46.310 1+0 records out 00:13:46.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441213 s, 9.3 MB/s 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:46.310 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:13:46.569 /dev/nbd11 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.569 1+0 records in 00:13:46.569 1+0 records out 00:13:46.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360066 s, 11.4 MB/s 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:46.569 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:13:46.827 /dev/nbd12 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd12 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd12 /proc/partitions 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd12 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.827 1+0 records in 00:13:46.827 1+0 records out 00:13:46.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327438 s, 12.5 MB/s 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:46.827 11:37:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:13:47.086 /dev/nbd13 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd13 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd13 /proc/partitions 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.086 1+0 records in 00:13:47.086 1+0 records out 00:13:47.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040633 s, 10.1 MB/s 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:47.086 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:13:47.344 /dev/nbd14 00:13:47.344 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd14 00:13:47.344 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:47.344 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd14 00:13:47.344 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:47.344 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:47.344 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:47.345 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd14 /proc/partitions 00:13:47.345 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:47.345 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:47.345 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:47.345 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.345 1+0 records in 00:13:47.345 1+0 records out 00:13:47.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456199 s, 9.0 MB/s 00:13:47.345 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.345 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:47.345 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:13:47.604 /dev/nbd15 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd15 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd15 /proc/partitions 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:47.604 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.862 1+0 records in 00:13:47.862 1+0 records out 00:13:47.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594311 s, 6.9 MB/s 00:13:47.862 11:37:19 
blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:13:47.862 /dev/nbd2 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:47.862 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.121 1+0 records in 00:13:48.121 1+0 records out 00:13:48.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512055 s, 8.0 MB/s 00:13:48.121 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.121 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:48.121 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.121 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:48.121 11:37:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:48.121 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.121 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:48.121 11:37:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:13:48.459 /dev/nbd3 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3 00:13:48.459 11:37:20 
blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.459 1+0 records in 00:13:48.459 1+0 records out 00:13:48.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534391 s, 7.7 MB/s 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:48.459 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:13:48.719 /dev/nbd4 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd4 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd4 /proc/partitions 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.719 1+0 records in 00:13:48.719 1+0 records out 00:13:48.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000990761 s, 4.1 MB/s 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:48.719 11:37:20 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:48.719 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:13:48.978 /dev/nbd5 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd5 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd5 /proc/partitions 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.978 1+0 records in 00:13:48.978 1+0 records out 00:13:48.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567243 s, 7.2 MB/s 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:48.978 11:37:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:13:49.236 /dev/nbd6 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd6 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:49.236 
11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd6 /proc/partitions 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.236 1+0 records in 00:13:49.236 1+0 records out 00:13:49.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869414 s, 4.7 MB/s 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:49.236 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:13:49.495 /dev/nbd7 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd7 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd7 /proc/partitions 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.495 1+0 records in 00:13:49.495 1+0 records out 00:13:49.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702705 s, 5.8 MB/s 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # 
return 0 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.495 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:49.496 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:13:49.754 /dev/nbd8 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd8 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd8 /proc/partitions 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.754 1+0 records in 00:13:49.754 1+0 records out 00:13:49.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794827 s, 5.2 MB/s 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:49.754 11:37:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:13:50.013 /dev/nbd9 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd9 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd9 /proc/partitions 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 
)) 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.013 1+0 records in 00:13:50.013 1+0 records out 00:13:50.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125397 s, 3.3 MB/s 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:13:50.013 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd0", 00:13:50.271 "bdev_name": "Malloc0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd1", 00:13:50.271 "bdev_name": "Malloc1p0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd10", 00:13:50.271 "bdev_name": "Malloc1p1" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd11", 00:13:50.271 "bdev_name": "Malloc2p0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd12", 00:13:50.271 "bdev_name": "Malloc2p1" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd13", 00:13:50.271 "bdev_name": "Malloc2p2" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd14", 00:13:50.271 "bdev_name": "Malloc2p3" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd15", 00:13:50.271 "bdev_name": "Malloc2p4" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd2", 00:13:50.271 "bdev_name": "Malloc2p5" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd3", 00:13:50.271 "bdev_name": "Malloc2p6" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd4", 00:13:50.271 "bdev_name": "Malloc2p7" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd5", 00:13:50.271 "bdev_name": "TestPT" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd6", 00:13:50.271 "bdev_name": "raid0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd7", 00:13:50.271 "bdev_name": "concat0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd8", 00:13:50.271 "bdev_name": "raid1" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd9", 00:13:50.271 "bdev_name": "AIO0" 00:13:50.271 } 00:13:50.271 ]' 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd0", 
00:13:50.271 "bdev_name": "Malloc0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd1", 00:13:50.271 "bdev_name": "Malloc1p0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd10", 00:13:50.271 "bdev_name": "Malloc1p1" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd11", 00:13:50.271 "bdev_name": "Malloc2p0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd12", 00:13:50.271 "bdev_name": "Malloc2p1" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd13", 00:13:50.271 "bdev_name": "Malloc2p2" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd14", 00:13:50.271 "bdev_name": "Malloc2p3" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd15", 00:13:50.271 "bdev_name": "Malloc2p4" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd2", 00:13:50.271 "bdev_name": "Malloc2p5" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd3", 00:13:50.271 "bdev_name": "Malloc2p6" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd4", 00:13:50.271 "bdev_name": "Malloc2p7" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd5", 00:13:50.271 "bdev_name": "TestPT" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd6", 00:13:50.271 "bdev_name": "raid0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd7", 00:13:50.271 "bdev_name": "concat0" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd8", 00:13:50.271 "bdev_name": "raid1" 00:13:50.271 }, 00:13:50.271 { 00:13:50.271 "nbd_device": "/dev/nbd9", 00:13:50.271 "bdev_name": "AIO0" 00:13:50.271 } 00:13:50.271 ]' 00:13:50.271 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:50.529 /dev/nbd1 00:13:50.529 /dev/nbd10 00:13:50.529 /dev/nbd11 00:13:50.529 /dev/nbd12 00:13:50.529 /dev/nbd13 00:13:50.529 /dev/nbd14 00:13:50.529 /dev/nbd15 00:13:50.529 /dev/nbd2 00:13:50.529 /dev/nbd3 00:13:50.529 /dev/nbd4 00:13:50.529 /dev/nbd5 00:13:50.529 /dev/nbd6 00:13:50.529 /dev/nbd7 00:13:50.529 /dev/nbd8 00:13:50.529 /dev/nbd9' 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:50.529 /dev/nbd1 00:13:50.529 /dev/nbd10 00:13:50.529 /dev/nbd11 00:13:50.529 /dev/nbd12 00:13:50.529 /dev/nbd13 00:13:50.529 /dev/nbd14 00:13:50.529 /dev/nbd15 00:13:50.529 /dev/nbd2 00:13:50.529 /dev/nbd3 00:13:50.529 /dev/nbd4 00:13:50.529 /dev/nbd5 00:13:50.529 /dev/nbd6 00:13:50.529 /dev/nbd7 00:13:50.529 /dev/nbd8 00:13:50.529 /dev/nbd9' 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:50.529 256+0 records in 00:13:50.529 256+0 records out 00:13:50.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537034 s, 195 MB/s 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:50.529 256+0 records in 00:13:50.529 256+0 records out 00:13:50.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184869 s, 5.7 MB/s 00:13:50.529 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:50.530 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:50.789 256+0 records in 00:13:50.789 256+0 records out 00:13:50.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186452 s, 5.6 MB/s 00:13:50.789 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:50.789 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:51.075 256+0 records in 00:13:51.076 256+0 records out 00:13:51.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188722 s, 5.6 MB/s 00:13:51.076 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:51.076 11:37:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:51.333 256+0 records in 00:13:51.333 256+0 records out 00:13:51.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189192 s, 5.5 MB/s 00:13:51.333 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:51.333 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:51.333 256+0 records in 00:13:51.333 256+0 records out 00:13:51.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184846 s, 5.7 MB/s 00:13:51.333 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:51.333 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:51.592 256+0 records in 00:13:51.592 256+0 records out 00:13:51.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186784 s, 5.6 MB/s 00:13:51.592 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:51.592 11:37:23 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:51.850 256+0 records in 00:13:51.850 256+0 records out 00:13:51.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185123 s, 5.7 MB/s 00:13:51.850 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:51.850 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:13:51.850 256+0 records in 00:13:51.850 256+0 records out 00:13:51.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186747 s, 5.6 MB/s 00:13:51.850 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:51.850 11:37:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:13:52.108 256+0 records in 00:13:52.108 256+0 records out 00:13:52.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186375 s, 5.6 MB/s 00:13:52.108 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:52.108 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:13:52.367 256+0 records in 00:13:52.367 256+0 records out 00:13:52.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186835 s, 5.6 MB/s 00:13:52.367 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:52.367 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:13:52.625 256+0 records in 00:13:52.625 256+0 records out 00:13:52.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187558 s, 5.6 MB/s 00:13:52.625 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:52.625 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:13:52.625 256+0 records in 00:13:52.625 256+0 records out 00:13:52.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177762 s, 5.9 MB/s 00:13:52.625 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:52.625 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:13:52.882 256+0 records in 00:13:52.882 256+0 records out 00:13:52.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183218 s, 5.7 MB/s 00:13:52.882 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:52.882 11:37:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:13:53.140 256+0 records in 00:13:53.140 256+0 records out 00:13:53.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18716 s, 5.6 MB/s 00:13:53.141 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:53.141 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:13:53.398 256+0 records in 00:13:53.398 256+0 
records out 00:13:53.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188304 s, 5.6 MB/s 00:13:53.398 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:53.398 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:13:53.656 256+0 records in 00:13:53.656 256+0 records out 00:13:53.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.277452 s, 3.8 MB/s 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:53.656 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.657 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:54.230 11:37:25 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.230 11:37:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.230 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.498 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd11 /proc/partitions 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.756 11:37:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.025 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.289 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.547 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:55.817 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:55.817 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:55.817 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:55.817 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.817 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.817 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:56.099 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:56.099 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.099 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.099 11:37:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.357 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.629 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.887 
11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.887 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.153 11:37:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.417 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:57.676 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.934 11:37:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.193 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # 
nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:58.500 malloc_lvol_verify 00:13:58.500 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:58.761 692ae87c-1f52-4e2b-8ba7-5c901c4fde42 00:13:58.761 11:37:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:59.022 57e56562-a2cc-458e-9f24-4309ec1ae0ec 00:13:59.022 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:59.280 /dev/nbd0 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:59.280 mke2fs 1.46.5 (30-Dec-2021) 00:13:59.280 00:13:59.280 Filesystem too small for a journal 00:13:59.280 Discarding device blocks: 0/1024 done 00:13:59.280 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:59.280 00:13:59.280 Allocating group tables: 0/1 done 00:13:59.280 Writing inode tables: 0/1 done 00:13:59.280 Writing superblocks and filesystem accounting information: 0/1 done 00:13:59.280 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.280 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.538 11:37:31 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 118095 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 118095 ']' 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 118095 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:59.538 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 118095 00:13:59.797 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:59.797 killing process with pid 118095 00:13:59.797 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:59.797 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 118095' 00:13:59.797 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # kill 118095 00:13:59.797 11:37:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@973 -- # wait 118095 00:14:03.089 11:37:34 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:14:03.089 00:14:03.089 real 0m28.809s 00:14:03.089 user 0m36.733s 00:14:03.089 sys 0m12.301s 00:14:03.089 ************************************ 00:14:03.089 END TEST bdev_nbd 00:14:03.089 ************************************ 00:14:03.089 11:37:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:03.089 11:37:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:03.089 11:37:34 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:03.089 11:37:34 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:14:03.089 11:37:34 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:14:03.089 11:37:34 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:03.089 11:37:34 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:03.089 11:37:34 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:03.089 11:37:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:03.089 ************************************ 00:14:03.089 START TEST bdev_fio 00:14:03.089 ************************************ 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # fio_test_suite '' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:14:03.089 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT 
SIGTERM EXIT 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=verify 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type=AIO 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z verify ']' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1298 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1312 -- # '[' verify == verify ']' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # cat 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1322 -- # '[' AIO == AIO ']' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # /usr/src/fio/fio --version 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # echo serialize_overlap=1 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@343 -- # echo filename=AIO0 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:03.089 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:14:03.090 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:03.090 11:37:34 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:03.090 ************************************ 00:14:03.090 START TEST bdev_fio_rw_verify 00:14:03.090 ************************************ 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local sanitizers 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # shift 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local asan_lib= 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libasan 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n 
/lib/x86_64-linux-gnu/libasan.so.6 ]] 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # break 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:03.090 11:37:34 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:03.090 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:03.090 fio-3.35 00:14:03.090 Starting 16 threads 00:14:15.416 00:14:15.416 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=119294: Mon Jun 10 11:37:46 2024 00:14:15.416 read: IOPS=63.4k, BW=248MiB/s (260MB/s)(2477MiB/10001msec) 00:14:15.416 slat (usec): min=2, max=48054, avg=43.95, stdev=493.57 00:14:15.416 clat (usec): min=10, max=56424, avg=350.60, stdev=1409.94 00:14:15.416 lat (usec): min=27, max=56451, avg=394.55, stdev=1493.80 00:14:15.416 clat percentiles (usec): 00:14:15.416 | 50.000th=[ 210], 99.000th=[ 725], 99.900th=[16450], 99.990th=[36439], 00:14:15.416 | 99.999th=[56361] 00:14:15.416 write: IOPS=97.9k, BW=383MiB/s (401MB/s)(3783MiB/9887msec); 0 zone resets 00:14:15.416 slat (usec): min=6, 
max=96055, avg=82.63, stdev=790.45 00:14:15.416 clat (usec): min=6, max=96361, avg=478.64, stdev=1830.56 00:14:15.416 lat (usec): min=33, max=96400, avg=561.27, stdev=1993.97 00:14:15.416 clat percentiles (usec): 00:14:15.416 | 50.000th=[ 265], 99.000th=[10421], 99.900th=[22152], 99.990th=[40633], 00:14:15.416 | 99.999th=[84411] 00:14:15.416 bw ( KiB/s): min=216975, max=646776, per=99.03%, avg=387978.05, stdev=7324.43, samples=304 00:14:15.416 iops : min=54243, max=161694, avg=96994.47, stdev=1831.12, samples=304 00:14:15.416 lat (usec) : 10=0.01%, 20=0.01%, 50=0.41%, 100=7.51%, 250=44.86% 00:14:15.416 lat (usec) : 500=40.38%, 750=5.26%, 1000=0.20% 00:14:15.416 lat (msec) : 2=0.12%, 4=0.09%, 10=0.24%, 20=0.82%, 50=0.10% 00:14:15.416 lat (msec) : 100=0.01% 00:14:15.416 cpu : usr=56.14%, sys=2.02%, ctx=250083, majf=2, minf=67251 00:14:15.416 IO depths : 1=11.1%, 2=24.0%, 4=51.8%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.416 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.416 issued rwts: total=634008,968346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.416 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:15.416 00:14:15.416 Run status group 0 (all jobs): 00:14:15.416 READ: bw=248MiB/s (260MB/s), 248MiB/s-248MiB/s (260MB/s-260MB/s), io=2477MiB (2597MB), run=10001-10001msec 00:14:15.416 WRITE: bw=383MiB/s (401MB/s), 383MiB/s-383MiB/s (401MB/s-401MB/s), io=3783MiB (3966MB), run=9887-9887msec 00:14:18.040 ----------------------------------------------------- 00:14:18.040 Suppressions used: 00:14:18.040 count bytes template 00:14:18.040 16 140 /usr/src/fio/parse.c 00:14:18.040 10293 988128 /usr/src/fio/iolog.c 00:14:18.040 1 904 libcrypto.so 00:14:18.040 ----------------------------------------------------- 00:14:18.040 00:14:18.040 00:14:18.040 real 0m15.242s 00:14:18.040 user 1m36.861s 00:14:18.040 sys 0m4.254s 00:14:18.040 11:37:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:18.040 ************************************ 00:14:18.040 END TEST bdev_fio_rw_verify 00:14:18.040 ************************************ 00:14:18.040 11:37:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=trim 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type= 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z trim ']' 
00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1298 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1312 -- # '[' trim == verify ']' 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # '[' trim == trim ']' 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # echo rw=trimwrite 00:14:18.040 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:18.041 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "4be3252b-ac23-4931-b31d-c0ea1aed0569"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4be3252b-ac23-4931-b31d-c0ea1aed0569",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "c7790993-aa5d-5005-89d8-560eeac2faf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c7790993-aa5d-5005-89d8-560eeac2faf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e8cb1eff-23e0-5445-93eb-de5b16cc91fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e8cb1eff-23e0-5445-93eb-de5b16cc91fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2fd99128-e182-5691-a811-1c04f85974bc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2fd99128-e182-5691-a811-1c04f85974bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b6cc34d9-b868-520f-b274-820901feae32"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b6cc34d9-b868-520f-b274-820901feae32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1583b64c-0803-5f38-80be-b60481d0b523"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1583b64c-0803-5f38-80be-b60481d0b523",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "074209bb-3f14-55bb-a57f-a9b6e1855e61"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "074209bb-3f14-55bb-a57f-a9b6e1855e61",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3d95a8b3-2d5c-5e51-ae5b-f43e03f197f4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d95a8b3-2d5c-5e51-ae5b-f43e03f197f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": 
"Malloc2p5",' ' "aliases": [' ' "bf04a033-a3a3-5457-a910-e8d05ec2e923"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bf04a033-a3a3-5457-a910-e8d05ec2e923",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "48d2c5ac-b5dc-5bea-ad87-cf3cf284960c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48d2c5ac-b5dc-5bea-ad87-cf3cf284960c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "679aeee3-6a48-589e-880d-106ea8c4efdc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "679aeee3-6a48-589e-880d-106ea8c4efdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4211d1e0-54d3-54ec-8b09-d46b133730f3"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4211d1e0-54d3-54ec-8b09-d46b133730f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6c8bd1f0-9099-432f-9a98-e90a992efd8d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6c8bd1f0-9099-432f-9a98-e90a992efd8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6c8bd1f0-9099-432f-9a98-e90a992efd8d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b7a9d2e4-655d-4447-b84c-f7148f0eef8a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "fc180a42-af6e-4fa9-93f3-a03a8186b8f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4d9ea621-3d1f-49c3-838c-f7a5db74bc35"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4d9ea621-3d1f-49c3-838c-f7a5db74bc35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4d9ea621-3d1f-49c3-838c-f7a5db74bc35",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b216b8e2-9495-4bb2-bb89-36725ad678b4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "db70d9af-4cdd-468b-97b6-68a155c4a2b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "74d95990-64fa-4de1-9857-c25e9f87d36a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "74d95990-64fa-4de1-9857-c25e9f87d36a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "74d95990-64fa-4de1-9857-c25e9f87d36a",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "998fc740-bc4b-4ce7-b87f-ee4aba87af06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "87425162-9ba5-4ee8-b2fc-8d187996d423",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "73ed1f1b-4523-4cd2-acf1-6e41c74cdc5a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "73ed1f1b-4523-4cd2-acf1-6e41c74cdc5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:18.041 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:14:18.041 Malloc1p0 00:14:18.041 Malloc1p1 00:14:18.041 Malloc2p0 00:14:18.041 Malloc2p1 00:14:18.041 Malloc2p2 00:14:18.041 Malloc2p3 00:14:18.041 Malloc2p4 00:14:18.041 Malloc2p5 00:14:18.041 Malloc2p6 00:14:18.041 Malloc2p7 00:14:18.041 TestPT 00:14:18.041 raid0 00:14:18.041 concat0 ]] 00:14:18.041 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "4be3252b-ac23-4931-b31d-c0ea1aed0569"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4be3252b-ac23-4931-b31d-c0ea1aed0569",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "c7790993-aa5d-5005-89d8-560eeac2faf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c7790993-aa5d-5005-89d8-560eeac2faf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e8cb1eff-23e0-5445-93eb-de5b16cc91fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e8cb1eff-23e0-5445-93eb-de5b16cc91fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2fd99128-e182-5691-a811-1c04f85974bc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2fd99128-e182-5691-a811-1c04f85974bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b6cc34d9-b868-520f-b274-820901feae32"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b6cc34d9-b868-520f-b274-820901feae32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1583b64c-0803-5f38-80be-b60481d0b523"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1583b64c-0803-5f38-80be-b60481d0b523",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": 
"Malloc2p3",' ' "aliases": [' ' "074209bb-3f14-55bb-a57f-a9b6e1855e61"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "074209bb-3f14-55bb-a57f-a9b6e1855e61",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3d95a8b3-2d5c-5e51-ae5b-f43e03f197f4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d95a8b3-2d5c-5e51-ae5b-f43e03f197f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "bf04a033-a3a3-5457-a910-e8d05ec2e923"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bf04a033-a3a3-5457-a910-e8d05ec2e923",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "48d2c5ac-b5dc-5bea-ad87-cf3cf284960c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48d2c5ac-b5dc-5bea-ad87-cf3cf284960c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "679aeee3-6a48-589e-880d-106ea8c4efdc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "679aeee3-6a48-589e-880d-106ea8c4efdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' 
"flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4211d1e0-54d3-54ec-8b09-d46b133730f3"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4211d1e0-54d3-54ec-8b09-d46b133730f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6c8bd1f0-9099-432f-9a98-e90a992efd8d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6c8bd1f0-9099-432f-9a98-e90a992efd8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6c8bd1f0-9099-432f-9a98-e90a992efd8d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b7a9d2e4-655d-4447-b84c-f7148f0eef8a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "fc180a42-af6e-4fa9-93f3-a03a8186b8f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4d9ea621-3d1f-49c3-838c-f7a5db74bc35"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4d9ea621-3d1f-49c3-838c-f7a5db74bc35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4d9ea621-3d1f-49c3-838c-f7a5db74bc35",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b216b8e2-9495-4bb2-bb89-36725ad678b4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "db70d9af-4cdd-468b-97b6-68a155c4a2b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "74d95990-64fa-4de1-9857-c25e9f87d36a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "74d95990-64fa-4de1-9857-c25e9f87d36a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "74d95990-64fa-4de1-9857-c25e9f87d36a",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "998fc740-bc4b-4ce7-b87f-ee4aba87af06",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "87425162-9ba5-4ee8-b2fc-8d187996d423",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "73ed1f1b-4523-4cd2-acf1-6e41c74cdc5a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "73ed1f1b-4523-4cd2-acf1-6e41c74cdc5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # 
echo '[job_Malloc0]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:14:18.042 11:37:49 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.042 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:14:18.043 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:14:18.043 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:18.043 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:14:18.043 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:14:18.043 11:37:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.043 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:14:18.043 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:18.043 11:37:49 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:18.043 ************************************ 00:14:18.043 START TEST bdev_fio_trim 00:14:18.043 ************************************ 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # local sanitizers 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- 
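Editor's note: the blockdev.sh@356-358 trace above is a loop that emits one fio job section per bdev whose dump reports unmap support; AIO0 is skipped because its entry above has "unmap": false. A reconstruction of the loop, with the output redirect assumed since the trace only shows the echoes:

    # Reconstruction of the traced loop (blockdev.sh@356-358); $fio_config is an assumed
    # variable, the trace only shows the echoed job lines.
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"
        echo "filename=$b"
    done >> "$fio_config"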
common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # shift 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # local asan_lib= 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libasan 00:14:18.043 11:37:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:14:18.043 11:37:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:14:18.043 11:37:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:14:18.043 11:37:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # break 00:14:18.043 11:37:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.043 11:37:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.301 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 
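Editor's note: the fio_bdev wrapper traced above locates the ASAN runtime that the SPDK fio plugin was built against and preloads it together with the plugin before launching fio. Roughly, with paths as in this run and some bookkeeping flags omitted:

    # Sketch of the traced wrapper: resolve libasan via ldd, then preload it with the plugin.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio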
job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.301 fio-3.35 00:14:18.301 Starting 14 threads 00:14:30.518 00:14:30.518 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=119528: Mon Jun 10 11:38:01 2024 00:14:30.518 write: IOPS=142k, BW=555MiB/s (582MB/s)(5556MiB/10005msec); 0 zone resets 00:14:30.518 slat (usec): min=2, max=28082, avg=34.87, stdev=390.17 00:14:30.518 clat (usec): min=12, max=28326, avg=249.38, stdev=1055.04 00:14:30.518 lat (usec): min=21, max=28348, avg=284.25, stdev=1124.56 00:14:30.518 clat percentiles (usec): 00:14:30.518 | 50.000th=[ 167], 99.000th=[ 490], 99.900th=[16319], 99.990th=[20317], 00:14:30.518 | 99.999th=[28181] 00:14:30.518 bw ( KiB/s): min=382998, max=825021, per=99.84%, avg=567750.21, stdev=10699.79, samples=266 00:14:30.518 iops : min=95749, max=206255, avg=141937.00, stdev=2674.94, samples=266 00:14:30.518 trim: IOPS=142k, BW=555MiB/s (582MB/s)(5556MiB/10005msec); 0 zone resets 00:14:30.518 slat (usec): min=4, max=28060, avg=23.83, stdev=319.44 00:14:30.518 clat (usec): min=4, max=28348, avg=268.51, stdev=1091.47 00:14:30.518 lat (usec): min=15, max=28362, avg=292.34, stdev=1137.22 00:14:30.518 clat percentiles (usec): 00:14:30.518 | 50.000th=[ 186], 99.000th=[ 486], 99.900th=[16319], 99.990th=[20317], 00:14:30.518 | 99.999th=[28181] 00:14:30.518 bw ( KiB/s): min=383022, max=825085, per=99.85%, avg=567752.32, stdev=10700.00, samples=266 00:14:30.518 iops : min=95755, max=206271, avg=141937.32, stdev=2675.00, samples=266 00:14:30.518 lat (usec) : 10=0.10%, 20=0.25%, 50=0.79%, 100=8.67%, 250=75.09% 00:14:30.518 lat (usec) : 500=14.18%, 750=0.36%, 1000=0.01% 00:14:30.518 lat (msec) : 2=0.01%, 4=0.01%, 10=0.04%, 20=0.48%, 50=0.01% 00:14:30.518 cpu : usr=68.81%, sys=0.36%, ctx=169434, majf=0, minf=831 00:14:30.518 IO depths : 1=12.3%, 2=24.6%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:30.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.518 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.518 issued rwts: total=0,1422291,1422294,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.518 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:30.518 00:14:30.518 Run status group 0 (all jobs): 00:14:30.518 WRITE: bw=555MiB/s (582MB/s), 555MiB/s-555MiB/s (582MB/s-582MB/s), io=5556MiB (5826MB), run=10005-10005msec 00:14:30.518 TRIM: bw=555MiB/s (582MB/s), 555MiB/s-555MiB/s (582MB/s-582MB/s), io=5556MiB (5826MB), run=10005-10005msec 00:14:33.046 ----------------------------------------------------- 00:14:33.046 Suppressions used: 00:14:33.046 count bytes template 00:14:33.046 14 129 /usr/src/fio/parse.c 00:14:33.046 1 904 libcrypto.so 00:14:33.046 ----------------------------------------------------- 00:14:33.046 00:14:33.046 00:14:33.046 real 0m14.587s 00:14:33.046 user 1m42.209s 00:14:33.046 sys 0m1.314s 00:14:33.046 11:38:04 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:33.046 11:38:04 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:14:33.046 ************************************ 00:14:33.046 END TEST bdev_fio_trim 00:14:33.046 ************************************ 00:14:33.046 11:38:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:14:33.046 11:38:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.046 11:38:04 blockdev_general.bdev_fio 
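Editor's note: the trim-run figures above are mutually consistent. The group average of 141,937 IOPS at 4 KiB works out to 141,937 x 4096 B, roughly 581 MB/s (about 555 MiB/s), and 5556 MiB moved in 10.005 s likewise gives about 555 MiB/s, matching the reported bandwidth and the WRITE/TRIM run-status lines.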
-- bdev/blockdev.sh@370 -- # popd 00:14:33.046 /home/vagrant/spdk_repo/spdk 00:14:33.046 11:38:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:14:33.046 00:14:33.046 real 0m30.183s 00:14:33.046 user 3m19.294s 00:14:33.046 sys 0m5.685s 00:14:33.046 11:38:04 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:33.046 11:38:04 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:33.046 ************************************ 00:14:33.046 END TEST bdev_fio 00:14:33.046 ************************************ 00:14:33.046 11:38:04 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:33.046 11:38:04 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:33.046 11:38:04 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:14:33.046 11:38:04 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:33.046 11:38:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:33.046 ************************************ 00:14:33.046 START TEST bdev_verify 00:14:33.046 ************************************ 00:14:33.046 11:38:04 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:33.046 [2024-06-10 11:38:04.768174] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:14:33.046 [2024-06-10 11:38:04.768886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119722 ] 00:14:33.046 [2024-06-10 11:38:04.943579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.304 [2024-06-10 11:38:05.195371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.304 [2024-06-10 11:38:05.195377] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.870 [2024-06-10 11:38:05.672833] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:33.870 [2024-06-10 11:38:05.672932] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:33.870 [2024-06-10 11:38:05.680763] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:33.870 [2024-06-10 11:38:05.680837] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:33.870 [2024-06-10 11:38:05.688772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:33.870 [2024-06-10 11:38:05.688851] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:33.870 [2024-06-10 11:38:05.688914] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:33.870 [2024-06-10 11:38:05.923579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:33.870 [2024-06-10 11:38:05.923676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.870 [2024-06-10 11:38:05.923733] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009c80 00:14:33.870 [2024-06-10 11:38:05.923769] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.870 [2024-06-10 11:38:05.926460] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.870 [2024-06-10 11:38:05.926514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:34.438 Running I/O for 5 seconds... 00:14:41.012 00:14:41.012 Latency(us) 00:14:41.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.012 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x0 length 0x1000 00:14:41.012 Malloc0 : 5.07 1236.90 4.83 0.00 0.00 103275.58 776.29 234681.30 00:14:41.012 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x1000 length 0x1000 00:14:41.012 Malloc0 : 5.07 1211.62 4.73 0.00 0.00 105435.85 694.37 377487.36 00:14:41.012 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x0 length 0x800 00:14:41.012 Malloc1p0 : 5.27 631.18 2.47 0.00 0.00 201728.10 3308.01 221698.93 00:14:41.012 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x800 length 0x800 00:14:41.012 Malloc1p0 : 5.28 630.80 2.46 0.00 0.00 201864.94 3261.20 209715.20 00:14:41.012 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x0 length 0x800 00:14:41.012 Malloc1p1 : 5.27 630.94 2.46 0.00 0.00 201251.13 3292.40 215707.06 00:14:41.012 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x800 length 0x800 00:14:41.012 Malloc1p1 : 5.28 630.56 2.46 0.00 0.00 201394.37 3198.78 206719.27 00:14:41.012 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x0 length 0x200 00:14:41.012 Malloc2p0 : 5.28 630.71 2.46 0.00 0.00 200745.72 3198.78 211712.49 00:14:41.012 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x200 length 0x200 00:14:41.012 Malloc2p0 : 5.28 630.32 2.46 0.00 0.00 200900.30 3198.78 200727.41 00:14:41.012 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x0 length 0x200 00:14:41.012 Malloc2p1 : 5.28 630.47 2.46 0.00 0.00 200259.52 3214.38 207717.91 00:14:41.012 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x200 length 0x200 00:14:41.012 Malloc2p1 : 5.28 630.08 2.46 0.00 0.00 200421.56 3198.78 195734.19 00:14:41.012 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x0 length 0x200 00:14:41.012 Malloc2p2 : 5.28 630.23 2.46 0.00 0.00 199780.33 3198.78 204721.98 00:14:41.012 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.012 Verification LBA range: start 0x200 length 0x200 00:14:41.012 Malloc2p2 : 5.28 629.84 2.46 0.00 0.00 199951.50 3058.35 193736.90 00:14:41.012 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x200 00:14:41.013 Malloc2p3 : 5.28 629.99 2.46 0.00 0.00 199326.97 3011.54 202724.69 
00:14:41.013 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x200 length 0x200 00:14:41.013 Malloc2p3 : 5.29 629.60 2.46 0.00 0.00 199499.80 2949.12 191739.61 00:14:41.013 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x200 00:14:41.013 Malloc2p4 : 5.28 629.74 2.46 0.00 0.00 198923.92 3073.95 198730.12 00:14:41.013 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x200 length 0x200 00:14:41.013 Malloc2p4 : 5.29 629.37 2.46 0.00 0.00 199102.23 3011.54 190740.97 00:14:41.013 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x200 00:14:41.013 Malloc2p5 : 5.29 629.50 2.46 0.00 0.00 198530.02 3027.14 194735.54 00:14:41.013 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x200 length 0x200 00:14:41.013 Malloc2p5 : 5.29 629.13 2.46 0.00 0.00 198712.62 2980.33 186746.39 00:14:41.013 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x200 00:14:41.013 Malloc2p6 : 5.29 629.27 2.46 0.00 0.00 198119.24 2824.29 191739.61 00:14:41.013 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x200 length 0x200 00:14:41.013 Malloc2p6 : 5.29 628.91 2.46 0.00 0.00 198305.61 2793.08 182751.82 00:14:41.013 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x200 00:14:41.013 Malloc2p7 : 5.29 629.03 2.46 0.00 0.00 197710.18 2824.29 188743.68 00:14:41.013 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x200 length 0x200 00:14:41.013 Malloc2p7 : 5.30 628.41 2.45 0.00 0.00 197977.83 2699.46 179755.89 00:14:41.013 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x1000 00:14:41.013 TestPT : 5.29 608.31 2.38 0.00 0.00 202715.32 1185.89 187745.04 00:14:41.013 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x1000 length 0x1000 00:14:41.013 TestPT : 5.30 603.85 2.36 0.00 0.00 205038.18 12732.71 259647.39 00:14:41.013 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x2000 00:14:41.013 raid0 : 5.30 628.40 2.45 0.00 0.00 196719.80 2980.33 170768.09 00:14:41.013 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x2000 length 0x2000 00:14:41.013 raid0 : 5.30 627.62 2.45 0.00 0.00 197057.15 2964.72 158784.37 00:14:41.013 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x2000 00:14:41.013 concat0 : 5.30 627.95 2.45 0.00 0.00 196383.28 2964.72 165774.87 00:14:41.013 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x2000 length 0x2000 00:14:41.013 concat0 : 5.31 627.29 2.45 0.00 0.00 196677.04 2855.50 153791.15 00:14:41.013 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 
0x0 length 0x1000 00:14:41.013 raid1 : 5.30 627.57 2.45 0.00 0.00 195985.99 3183.18 159783.01 00:14:41.013 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x1000 length 0x1000 00:14:41.013 raid1 : 5.31 627.02 2.45 0.00 0.00 196255.47 3401.63 149796.57 00:14:41.013 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x0 length 0x4e2 00:14:41.013 AIO0 : 5.31 627.21 2.45 0.00 0.00 195387.87 768.49 157785.72 00:14:41.013 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.013 Verification LBA range: start 0x4e2 length 0x4e2 00:14:41.013 AIO0 : 5.31 626.79 2.45 0.00 0.00 195576.69 760.69 159783.01 00:14:41.013 =================================================================================================================== 00:14:41.013 Total : 21278.61 83.12 0.00 0.00 188566.45 694.37 377487.36 00:14:42.915 00:14:42.915 real 0m10.220s 00:14:42.915 user 0m18.062s 00:14:42.915 sys 0m0.551s 00:14:42.915 11:38:14 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:42.915 11:38:14 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:42.915 ************************************ 00:14:42.915 END TEST bdev_verify 00:14:42.915 ************************************ 00:14:42.915 11:38:14 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:42.915 11:38:14 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:14:42.915 11:38:14 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:42.915 11:38:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:42.915 ************************************ 00:14:42.915 START TEST bdev_verify_big_io 00:14:42.915 ************************************ 00:14:42.915 11:38:14 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:43.173 [2024-06-10 11:38:15.062168] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
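Editor's note: with -C and core mask 0x3, the verify table above shows two jobs per bdev, one pinned to each core (the Core Mask 0x1 and 0x2 rows), and each bdev is split between them, which is why the second Malloc0 job verifies the range starting at 0x1000 rather than 0x0.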
00:14:43.173 [2024-06-10 11:38:15.062421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119861 ] 00:14:43.431 [2024-06-10 11:38:15.252066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:43.689 [2024-06-10 11:38:15.500698] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.689 [2024-06-10 11:38:15.500705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.946 [2024-06-10 11:38:15.985591] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:43.946 [2024-06-10 11:38:15.985681] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:43.946 [2024-06-10 11:38:15.993521] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:43.946 [2024-06-10 11:38:15.993580] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:43.946 [2024-06-10 11:38:16.001533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:43.946 [2024-06-10 11:38:16.001614] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:43.946 [2024-06-10 11:38:16.001665] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:44.202 [2024-06-10 11:38:16.252565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:44.202 [2024-06-10 11:38:16.252660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.202 [2024-06-10 11:38:16.252702] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:44.202 [2024-06-10 11:38:16.252725] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.202 [2024-06-10 11:38:16.255518] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.202 [2024-06-10 11:38:16.255591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:44.768 [2024-06-10 11:38:16.683473] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.687354] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.691234] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.695594] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.699295] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.703322] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.707126] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.711015] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.714582] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.718796] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.722768] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.727195] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.730965] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.735017] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.739279] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:14:44.768 [2024-06-10 11:38:16.743547] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:14:45.026 [2024-06-10 11:38:16.834619] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:45.027 [2024-06-10 11:38:16.842160] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:45.027 Running I/O for 5 seconds... 00:14:51.582 00:14:51.582 Latency(us) 00:14:51.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.582 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.582 Verification LBA range: start 0x0 length 0x100 00:14:51.583 Malloc0 : 5.67 225.78 14.11 0.00 0.00 557946.79 698.27 1462014.54 00:14:51.583 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x100 length 0x100 00:14:51.583 Malloc0 : 5.58 206.33 12.90 0.00 0.00 611671.38 725.58 1669732.45 00:14:51.583 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x80 00:14:51.583 Malloc1p0 : 6.43 42.27 2.64 0.00 0.00 2809720.31 1357.53 4633707.28 00:14:51.583 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x80 length 0x80 00:14:51.583 Malloc1p0 : 5.90 113.23 7.08 0.00 0.00 1053444.83 2293.76 1965331.02 00:14:51.583 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x80 00:14:51.583 Malloc1p1 : 6.44 42.27 2.64 0.00 0.00 2733964.84 1178.09 4473924.27 00:14:51.583 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x80 length 0x80 00:14:51.583 Malloc1p1 : 6.23 43.63 2.73 0.00 0.00 2641081.34 1302.92 4410011.06 00:14:51.583 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x20 00:14:51.583 Malloc2p0 : 6.09 31.51 1.97 0.00 0.00 921534.38 565.64 1581851.79 00:14:51.583 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x20 length 0x20 00:14:51.583 Malloc2p0 : 5.90 32.53 2.03 0.00 0.00 899549.78 585.14 1438047.09 00:14:51.583 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x20 00:14:51.583 Malloc2p1 : 6.10 31.50 1.97 0.00 0.00 915048.52 624.15 1557884.34 00:14:51.583 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x20 length 0x20 00:14:51.583 Malloc2p1 : 5.90 32.53 2.03 0.00 0.00 894430.51 686.57 1422068.78 00:14:51.583 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x20 00:14:51.583 Malloc2p2 : 6.10 31.49 1.97 0.00 0.00 909175.96 589.04 1541906.04 00:14:51.583 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x20 length 0x20 00:14:51.583 Malloc2p2 : 5.90 32.52 2.03 0.00 0.00 888275.39 628.05 1398101.33 00:14:51.583 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x20 00:14:51.583 Malloc2p3 : 6.10 31.48 1.97 0.00 0.00 902770.13 643.66 1525927.74 00:14:51.583 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x20 length 0x20 00:14:51.583 Malloc2p3 : 5.91 32.51 2.03 0.00 0.00 882453.14 589.04 1382123.03 00:14:51.583 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x20 00:14:51.583 Malloc2p4 : 6.10 31.47 1.97 0.00 0.00 896388.40 659.26 1501960.29 00:14:51.583 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x20 length 0x20 00:14:51.583 Malloc2p4 : 5.91 32.51 2.03 0.00 0.00 876572.84 585.14 1366144.73 00:14:51.583 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x20 00:14:51.583 Malloc2p5 : 6.10 31.46 1.97 0.00 0.00 889756.82 604.65 1477992.84 00:14:51.583 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x20 length 0x20 00:14:51.583 Malloc2p5 : 5.91 32.50 2.03 0.00 0.00 870885.05 624.15 1342177.28 00:14:51.583 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x20 00:14:51.583 Malloc2p6 : 6.10 31.45 1.97 0.00 0.00 883414.62 553.94 1462014.54 00:14:51.583 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x20 length 0x20 00:14:51.583 Malloc2p6 : 5.91 32.49 2.03 0.00 0.00 864824.25 565.64 1326198.98 00:14:51.583 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x20 00:14:51.583 Malloc2p7 : 6.11 31.44 1.97 0.00 0.00 877101.95 553.94 1438047.09 00:14:51.583 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x20 length 0x20 00:14:51.583 Malloc2p7 : 5.91 32.48 2.03 0.00 0.00 858644.59 608.55 1310220.68 00:14:51.583 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x100 00:14:51.583 TestPT : 6.51 44.26 2.77 0.00 0.00 2425042.64 1224.90 4186314.85 00:14:51.583 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x100 length 0x100 00:14:51.583 TestPT : 6.37 43.29 2.71 0.00 0.00 2479245.42 84385.40 3706965.82 00:14:51.583 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x200 00:14:51.583 raid0 : 6.48 46.89 2.93 0.00 0.00 2249906.46 1334.13 4042510.14 00:14:51.583 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x200 length 0x200 00:14:51.583 raid0 : 6.38 50.13 3.13 0.00 0.00 2105564.90 1326.32 4010553.54 00:14:51.583 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x200 00:14:51.583 concat0 : 6.42 54.06 3.38 0.00 0.00 1902421.06 1326.32 3914683.73 00:14:51.583 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x200 length 0x200 00:14:51.583 concat0 : 6.39 55.08 3.44 0.00 0.00 1898521.17 
1271.71 3898705.43 00:14:51.583 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x100 00:14:51.583 raid1 : 6.43 76.81 4.80 0.00 0.00 1322404.80 1903.66 3802835.63 00:14:51.583 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x100 length 0x100 00:14:51.583 raid1 : 6.38 60.82 3.80 0.00 0.00 1682868.33 1685.21 3770879.02 00:14:51.583 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x0 length 0x4e 00:14:51.583 AIO0 : 6.43 54.10 3.38 0.00 0.00 1119304.56 1872.46 2284897.04 00:14:51.583 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:14:51.583 Verification LBA range: start 0x4e length 0x4e 00:14:51.583 AIO0 : 6.39 67.17 4.20 0.00 0.00 916574.58 1045.46 2189027.23 00:14:51.583 =================================================================================================================== 00:14:51.583 Total : 1738.01 108.63 0.00 0.00 1256777.95 553.94 4633707.28 00:14:54.873 00:14:54.873 real 0m11.963s 00:14:54.873 user 0m21.919s 00:14:54.873 sys 0m0.572s 00:14:54.873 ************************************ 00:14:54.873 END TEST bdev_verify_big_io 00:14:54.873 11:38:26 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:54.873 11:38:26 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.873 ************************************ 00:14:55.130 11:38:26 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:55.130 11:38:26 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:14:55.130 11:38:26 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:55.130 11:38:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:55.130 ************************************ 00:14:55.130 START TEST bdev_write_zeroes 00:14:55.130 ************************************ 00:14:55.130 11:38:26 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:55.130 [2024-06-10 11:38:27.069672] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
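Editor's note: the queue-depth warnings at the start of this big-I/O run line up with the bdev sizes. With 64 KiB I/Os the cap equals the number of such I/Os that fit in a job's share of the bdev: AIO0 is 5000 blocks x 2048 B, about 9.77 MiB, halved between the two per-core jobs to about 4.88 MiB, and 4.88 MiB / 64 KiB is roughly 78, matching the reported limit of 78 (the Malloc2p* partitions work out to 32 the same way). The Total row is also self-consistent: 1738.01 IOPS x 64 KiB is about 108.6 MiB/s, as reported.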
00:14:55.130 [2024-06-10 11:38:27.069903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120029 ] 00:14:55.387 [2024-06-10 11:38:27.256128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.643 [2024-06-10 11:38:27.554188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.233 [2024-06-10 11:38:28.017484] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:56.233 [2024-06-10 11:38:28.017583] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:56.233 [2024-06-10 11:38:28.025459] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:56.233 [2024-06-10 11:38:28.025525] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:56.233 [2024-06-10 11:38:28.033458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:56.233 [2024-06-10 11:38:28.033527] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:56.233 [2024-06-10 11:38:28.033557] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:56.233 [2024-06-10 11:38:28.262104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:56.233 [2024-06-10 11:38:28.262236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.233 [2024-06-10 11:38:28.262280] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:56.233 [2024-06-10 11:38:28.262351] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.233 [2024-06-10 11:38:28.264994] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.233 [2024-06-10 11:38:28.265053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:56.798 Running I/O for 1 seconds... 
00:14:57.733 00:14:57.733 Latency(us) 00:14:57.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.733 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc0 : 1.03 5730.82 22.39 0.00 0.00 22318.96 550.03 38947.11 00:14:57.733 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc1p0 : 1.03 5723.86 22.36 0.00 0.00 22319.42 807.50 38447.79 00:14:57.733 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc1p1 : 1.03 5717.62 22.33 0.00 0.00 22304.23 784.09 37698.80 00:14:57.733 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc2p0 : 1.03 5711.29 22.31 0.00 0.00 22282.53 760.69 36949.82 00:14:57.733 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc2p1 : 1.03 5705.32 22.29 0.00 0.00 22258.69 803.60 36200.84 00:14:57.733 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc2p2 : 1.03 5699.35 22.26 0.00 0.00 22244.88 803.60 35451.86 00:14:57.733 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc2p3 : 1.03 5693.23 22.24 0.00 0.00 22224.37 780.19 34702.87 00:14:57.733 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc2p4 : 1.04 5686.77 22.21 0.00 0.00 22200.65 799.70 33953.89 00:14:57.733 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc2p5 : 1.05 5727.80 22.37 0.00 0.00 22005.60 784.09 33204.91 00:14:57.733 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc2p6 : 1.05 5720.69 22.35 0.00 0.00 21992.38 815.30 32455.92 00:14:57.733 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 Malloc2p7 : 1.05 5714.26 22.32 0.00 0.00 21980.15 827.00 31457.28 00:14:57.733 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 TestPT : 1.05 5707.22 22.29 0.00 0.00 21956.03 877.71 30583.47 00:14:57.733 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 raid0 : 1.06 5699.18 22.26 0.00 0.00 21926.40 1458.96 29210.33 00:14:57.733 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 concat0 : 1.06 5691.80 22.23 0.00 0.00 21880.35 1482.36 27837.20 00:14:57.733 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 raid1 : 1.06 5681.94 22.20 0.00 0.00 21838.90 2387.38 25340.59 00:14:57.733 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:57.733 AIO0 : 1.06 5672.67 22.16 0.00 0.00 21766.76 1396.54 24466.77 00:14:57.733 =================================================================================================================== 00:14:57.733 Total : 91283.82 356.58 0.00 0.00 22091.90 550.03 38947.11 00:15:01.007 00:15:01.007 real 0m5.797s 00:15:01.007 user 0m5.218s 00:15:01.007 sys 0m0.389s 00:15:01.007 ************************************ 00:15:01.007 END TEST bdev_write_zeroes 00:15:01.007 11:38:32 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:01.007 11:38:32 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:01.007 ************************************ 00:15:01.007 11:38:32 blockdev_general 
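Editor's note: unlike the verify passes, this write_zeroes pass runs on a single core (the EAL mask is 0x1 and only reactor 0 starts), so each bdev appears once in the table above; the Total row of 91,283.82 IOPS at 4096 B corresponds to about 356.6 MiB/s, matching the reported figure.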
-- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:01.007 11:38:32 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:15:01.007 11:38:32 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:01.007 11:38:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:01.007 ************************************ 00:15:01.007 START TEST bdev_json_nonenclosed 00:15:01.007 ************************************ 00:15:01.007 11:38:32 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:01.007 [2024-06-10 11:38:32.916946] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:15:01.007 [2024-06-10 11:38:32.917117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120122 ] 00:15:01.269 [2024-06-10 11:38:33.090047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.527 [2024-06-10 11:38:33.423234] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.527 [2024-06-10 11:38:33.423363] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:01.527 [2024-06-10 11:38:33.423440] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:01.527 [2024-06-10 11:38:33.423497] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:02.095 00:15:02.095 real 0m1.125s 00:15:02.095 user 0m0.872s 00:15:02.095 sys 0m0.152s 00:15:02.095 11:38:33 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:02.095 ************************************ 00:15:02.095 END TEST bdev_json_nonenclosed 00:15:02.095 ************************************ 00:15:02.095 11:38:33 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:02.095 11:38:34 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:02.095 11:38:34 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:15:02.095 11:38:34 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:02.095 11:38:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:02.095 ************************************ 00:15:02.095 START TEST bdev_json_nonarray 00:15:02.095 ************************************ 00:15:02.095 11:38:34 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:02.095 [2024-06-10 11:38:34.123605] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:15:02.095 [2024-06-10 11:38:34.123866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120162 ] 00:15:02.354 [2024-06-10 11:38:34.300605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.612 [2024-06-10 11:38:34.545730] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.612 [2024-06-10 11:38:34.545837] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:02.612 [2024-06-10 11:38:34.545890] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:02.612 [2024-06-10 11:38:34.545917] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:03.177 00:15:03.177 real 0m1.047s 00:15:03.177 user 0m0.774s 00:15:03.177 sys 0m0.173s 00:15:03.177 11:38:35 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:03.177 ************************************ 00:15:03.177 11:38:35 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:03.177 END TEST bdev_json_nonarray 00:15:03.177 ************************************ 00:15:03.177 11:38:35 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:15:03.177 11:38:35 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:15:03.177 11:38:35 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:03.177 11:38:35 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:03.177 11:38:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:03.177 ************************************ 00:15:03.177 START TEST bdev_qos 00:15:03.177 ************************************ 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # qos_test_suite '' 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=120200 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:15:03.177 Process qos testing pid: 120200 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 120200' 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 120200 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- common/autotest_common.sh@830 -- # '[' -z 120200 ']' 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:03.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
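Editor's note: the two negative tests above feed bdevperf configs that fail json_config validation, one whose top level is not enclosed in {} and one whose 'subsystems' key is not an array. The repository files themselves are not shown in the log; hypothetical stand-ins that would trigger the same two errors look like this:

    # Hypothetical stand-ins (not the repository files) for the two invalid config shapes:
    printf '%s\n' '"subsystems": []'     > nonenclosed.json   # top level not enclosed in {}
    printf '%s\n' '{ "subsystems": {} }' > nonarray.json      # "subsystems" is not an array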
00:15:03.177 11:38:35 blockdev_general.bdev_qos -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:03.177 11:38:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:03.177 [2024-06-10 11:38:35.225250] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:15:03.177 [2024-06-10 11:38:35.225424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120200 ] 00:15:03.435 [2024-06-10 11:38:35.395329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.694 [2024-06-10 11:38:35.699093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@863 -- # return 0 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:04.304 Malloc_0 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_0 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local i 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:04.304 [ 00:15:04.304 { 00:15:04.304 "name": "Malloc_0", 00:15:04.304 "aliases": [ 00:15:04.304 "b6d54043-0cc4-4f81-81a2-045899f6462d" 00:15:04.304 ], 00:15:04.304 "product_name": "Malloc disk", 00:15:04.304 "block_size": 512, 00:15:04.304 "num_blocks": 262144, 00:15:04.304 "uuid": "b6d54043-0cc4-4f81-81a2-045899f6462d", 00:15:04.304 "assigned_rate_limits": { 00:15:04.304 "rw_ios_per_sec": 0, 00:15:04.304 "rw_mbytes_per_sec": 0, 00:15:04.304 "r_mbytes_per_sec": 0, 00:15:04.304 "w_mbytes_per_sec": 0 00:15:04.304 }, 00:15:04.304 "claimed": false, 00:15:04.304 "zoned": false, 00:15:04.304 "supported_io_types": { 00:15:04.304 "read": true, 00:15:04.304 "write": true, 00:15:04.304 "unmap": true, 00:15:04.304 "write_zeroes": true, 00:15:04.304 "flush": true, 
00:15:04.304 "reset": true, 00:15:04.304 "compare": false, 00:15:04.304 "compare_and_write": false, 00:15:04.304 "abort": true, 00:15:04.304 "nvme_admin": false, 00:15:04.304 "nvme_io": false 00:15:04.304 }, 00:15:04.304 "memory_domains": [ 00:15:04.304 { 00:15:04.304 "dma_device_id": "system", 00:15:04.304 "dma_device_type": 1 00:15:04.304 }, 00:15:04.304 { 00:15:04.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.304 "dma_device_type": 2 00:15:04.304 } 00:15:04.304 ], 00:15:04.304 "driver_specific": {} 00:15:04.304 } 00:15:04.304 ] 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # return 0 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:04.304 Null_1 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_name=Null_1 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local i 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.304 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:04.565 [ 00:15:04.565 { 00:15:04.565 "name": "Null_1", 00:15:04.565 "aliases": [ 00:15:04.565 "80d4822e-f168-4aea-8d03-b8abc48c7cbf" 00:15:04.565 ], 00:15:04.565 "product_name": "Null disk", 00:15:04.565 "block_size": 512, 00:15:04.565 "num_blocks": 262144, 00:15:04.565 "uuid": "80d4822e-f168-4aea-8d03-b8abc48c7cbf", 00:15:04.565 "assigned_rate_limits": { 00:15:04.565 "rw_ios_per_sec": 0, 00:15:04.565 "rw_mbytes_per_sec": 0, 00:15:04.565 "r_mbytes_per_sec": 0, 00:15:04.565 "w_mbytes_per_sec": 0 00:15:04.565 }, 00:15:04.565 "claimed": false, 00:15:04.565 "zoned": false, 00:15:04.565 "supported_io_types": { 00:15:04.565 "read": true, 00:15:04.565 "write": true, 00:15:04.565 "unmap": false, 00:15:04.565 "write_zeroes": true, 00:15:04.565 "flush": false, 00:15:04.565 "reset": true, 00:15:04.565 "compare": false, 00:15:04.565 "compare_and_write": false, 00:15:04.565 "abort": true, 00:15:04.565 "nvme_admin": false, 00:15:04.565 "nvme_io": false 00:15:04.565 }, 00:15:04.565 "driver_specific": {} 00:15:04.565 } 00:15:04.565 ] 
00:15:04.565 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # return 0 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:04.565 11:38:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:15:04.565 Running I/O for 60 seconds... 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 70955.03 283820.12 0.00 0.00 287744.00 0.00 0.00 ' 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=70955.03 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 70955 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=70955 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=17000 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 17000 -gt 1000 ']' 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:09.832 11:38:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:09.832 ************************************ 00:15:09.832 START TEST bdev_qos_iops 00:15:09.832 ************************************ 00:15:09.832 11:38:41 
blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # run_qos_test 17000 IOPS Malloc_0 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=17000 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:09.832 11:38:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:15:15.150 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 16987.58 67950.33 0.00 0.00 69360.00 0.00 0.00 ' 00:15:15.150 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:15:15.150 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=16987.58 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 16987 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=16987 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=15300 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=18700 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 16987 -lt 15300 ']' 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 16987 -gt 18700 ']' 00:15:15.151 00:15:15.151 real 0m5.224s 00:15:15.151 user 0m0.114s 00:15:15.151 sys 0m0.033s 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:15.151 ************************************ 00:15:15.151 END TEST bdev_qos_iops 00:15:15.151 ************************************ 00:15:15.151 11:38:46 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:15:15.151 11:38:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:15:15.151 11:38:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:15.151 11:38:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:15:15.151 11:38:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:15.151 11:38:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:15.151 11:38:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:15:15.151 11:38:46 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@378 -- # tail -1 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 25257.07 101028.27 0.00 0.00 102400.00 0.00 0.00 ' 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=102400.00 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 102400 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=102400 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=10 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 10 -lt 2 ']' 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:20.441 11:38:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:20.441 ************************************ 00:15:20.441 START TEST bdev_qos_bw 00:15:20.441 ************************************ 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # run_qos_test 10 BANDWIDTH Null_1 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=10 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:20.441 11:38:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 2560.88 10243.53 0.00 0.00 10544.00 0.00 0.00 ' 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- 
bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=10544.00 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 10544 00:15:25.710 ************************************ 00:15:25.710 END TEST bdev_qos_bw 00:15:25.710 ************************************ 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=10544 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=10240 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=9216 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=11264 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 10544 -lt 9216 ']' 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 10544 -gt 11264 ']' 00:15:25.710 00:15:25.710 real 0m5.273s 00:15:25.710 user 0m0.130s 00:15:25.710 sys 0m0.036s 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:15:25.710 11:38:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:15:25.710 11:38:57 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.710 11:38:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:25.710 11:38:57 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.710 11:38:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:15:25.710 11:38:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:25.710 11:38:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:25.710 11:38:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:25.710 ************************************ 00:15:25.710 START TEST bdev_qos_ro_bw 00:15:25.710 ************************************ 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 
00:15:25.710 11:38:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.79 2047.14 0.00 0.00 2068.00 0.00 0.00 ' 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2068.00 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2068 00:15:30.976 ************************************ 00:15:30.976 END TEST bdev_qos_ro_bw 00:15:30.976 ************************************ 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2068 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -lt 1843 ']' 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -gt 2252 ']' 00:15:30.976 00:15:30.976 real 0m5.188s 00:15:30.976 user 0m0.146s 00:15:30.976 sys 0m0.017s 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:30.976 11:39:02 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:15:30.976 11:39:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:15:30.976 11:39:02 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.976 11:39:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:31.544 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.544 11:39:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:15:31.544 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:31.545 00:15:31.545 Latency(us) 00:15:31.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.545 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:31.545 Malloc_0 : 26.78 23856.58 93.19 0.00 0.00 10628.47 1981.68 503316.48 00:15:31.545 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:31.545 Null_1 : 27.05 24701.50 96.49 0.00 0.00 10338.31 729.48 255652.82 00:15:31.545 =================================================================================================================== 00:15:31.545 Total : 48558.08 189.68 0.00 0.00 10480.14 729.48 503316.48 00:15:31.545 0 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
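In runnable form, the QoS flow that just completed is: measure unthrottled throughput, apply a limit at a fraction of it (17000 IOPS here against the ~70955 IOPS measured), and accept the result only if it stays within +/-10% of the limit. An illustrative sketch of the three limits exercised above, issued by hand (not part of the captured log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0   # read/write IOPS cap on the malloc bdev
    $RPC bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1     # read/write bandwidth cap on the null bdev
    $RPC bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0     # read-only bandwidth cap on the malloc bdev

    # run_qos_test accepts results within +/-10% of the configured limit, e.g. for the IOPS case:
    lower=$((17000 * 9 / 10))    # 15300, the lower_limit printed above
    upper=$((17000 * 11 / 10))   # 18700, the upper_limit printed above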
00:15:31.545 11:39:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 120200 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@949 -- # '[' -z 120200 ']' 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # kill -0 120200 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # uname 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 120200 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # echo 'killing process with pid 120200' 00:15:31.545 killing process with pid 120200 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # kill 120200 00:15:31.545 Received shutdown signal, test time was about 27.093090 seconds 00:15:31.545 00:15:31.545 Latency(us) 00:15:31.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.545 =================================================================================================================== 00:15:31.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.545 11:39:03 blockdev_general.bdev_qos -- common/autotest_common.sh@973 -- # wait 120200 00:15:33.448 ************************************ 00:15:33.448 END TEST bdev_qos 00:15:33.448 ************************************ 00:15:33.448 11:39:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:15:33.448 00:15:33.448 real 0m30.285s 00:15:33.448 user 0m31.097s 00:15:33.448 sys 0m0.682s 00:15:33.448 11:39:05 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:33.448 11:39:05 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:33.448 11:39:05 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:15:33.448 11:39:05 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:33.448 11:39:05 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:33.448 11:39:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:33.707 ************************************ 00:15:33.707 START TEST bdev_qd_sampling 00:15:33.707 ************************************ 00:15:33.707 Process bdev QD sampling period testing pid: 120678 00:15:33.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
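The qd-sampling process being started here follows the same pattern as the QoS run above: bdevperf is launched with -z so it idles until configured over RPC, the fixture bdevs are created through /var/tmp/spdk.sock, and the workload is then kicked off with the bdevperf.py helper. An illustrative sketch of that pattern (paths as used in this run; launch and wait handling simplified):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' &
    # ... create the target bdev(s) with rpc.py as in the sketches above ...
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests   # start the configured randread job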
00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # qd_sampling_test_suite '' 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=120678 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 120678' 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 120678 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@830 -- # '[' -z 120678 ']' 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:33.707 11:39:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:15:33.707 [2024-06-10 11:39:05.573291] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:15:33.707 [2024-06-10 11:39:05.573647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120678 ] 00:15:33.707 [2024-06-10 11:39:05.749332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:33.966 [2024-06-10 11:39:05.998724] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.966 [2024-06-10 11:39:05.998727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.533 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:34.533 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@863 -- # return 0 00:15:34.533 11:39:06 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:15:34.533 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.533 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:34.791 Malloc_QD 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_QD 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local i 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:34.791 [ 00:15:34.791 { 00:15:34.791 "name": "Malloc_QD", 00:15:34.791 "aliases": [ 00:15:34.791 "ce419363-bddc-4f48-9304-9ef8e89ce5df" 00:15:34.791 ], 00:15:34.791 "product_name": "Malloc disk", 00:15:34.791 "block_size": 512, 00:15:34.791 "num_blocks": 262144, 00:15:34.791 "uuid": "ce419363-bddc-4f48-9304-9ef8e89ce5df", 00:15:34.791 "assigned_rate_limits": { 00:15:34.791 "rw_ios_per_sec": 0, 00:15:34.791 "rw_mbytes_per_sec": 0, 00:15:34.791 "r_mbytes_per_sec": 0, 00:15:34.791 "w_mbytes_per_sec": 0 00:15:34.791 }, 00:15:34.791 "claimed": false, 00:15:34.791 "zoned": false, 00:15:34.791 "supported_io_types": { 00:15:34.791 "read": true, 00:15:34.791 "write": true, 00:15:34.791 "unmap": true, 00:15:34.791 "write_zeroes": true, 00:15:34.791 "flush": true, 00:15:34.791 "reset": true, 00:15:34.791 
"compare": false, 00:15:34.791 "compare_and_write": false, 00:15:34.791 "abort": true, 00:15:34.791 "nvme_admin": false, 00:15:34.791 "nvme_io": false 00:15:34.791 }, 00:15:34.791 "memory_domains": [ 00:15:34.791 { 00:15:34.791 "dma_device_id": "system", 00:15:34.791 "dma_device_type": 1 00:15:34.791 }, 00:15:34.791 { 00:15:34.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.791 "dma_device_type": 2 00:15:34.791 } 00:15:34.791 ], 00:15:34.791 "driver_specific": {} 00:15:34.791 } 00:15:34.791 ] 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.791 11:39:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # return 0 00:15:34.792 11:39:06 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:15:34.792 11:39:06 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:34.792 Running I/O for 5 seconds... 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:15:36.775 "tick_rate": 2100000000, 00:15:36.775 "ticks": 1886956866814, 00:15:36.775 "bdevs": [ 00:15:36.775 { 00:15:36.775 "name": "Malloc_QD", 00:15:36.775 "bytes_read": 887132672, 00:15:36.775 "num_read_ops": 216579, 00:15:36.775 "bytes_written": 0, 00:15:36.775 "num_write_ops": 0, 00:15:36.775 "bytes_unmapped": 0, 00:15:36.775 "num_unmap_ops": 0, 00:15:36.775 "bytes_copied": 0, 00:15:36.775 "num_copy_ops": 0, 00:15:36.775 "read_latency_ticks": 2081834077920, 00:15:36.775 "max_read_latency_ticks": 11873348, 00:15:36.775 "min_read_latency_ticks": 328156, 00:15:36.775 "write_latency_ticks": 0, 00:15:36.775 "max_write_latency_ticks": 0, 00:15:36.775 "min_write_latency_ticks": 0, 00:15:36.775 "unmap_latency_ticks": 0, 00:15:36.775 "max_unmap_latency_ticks": 0, 00:15:36.775 "min_unmap_latency_ticks": 0, 00:15:36.775 "copy_latency_ticks": 0, 00:15:36.775 "max_copy_latency_ticks": 0, 00:15:36.775 "min_copy_latency_ticks": 0, 00:15:36.775 "io_error": {}, 00:15:36.775 "queue_depth_polling_period": 10, 00:15:36.775 "queue_depth": 512, 00:15:36.775 "io_time": 30, 00:15:36.775 "weighted_io_time": 15360 
00:15:36.775 } 00:15:36.775 ] 00:15:36.775 }' 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.775 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:36.775 00:15:36.775 Latency(us) 00:15:36.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.775 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:36.775 Malloc_QD : 2.01 55878.85 218.28 0.00 0.00 4570.39 1076.66 5679.79 00:15:36.775 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:36.775 Malloc_QD : 2.01 55820.90 218.05 0.00 0.00 4575.41 764.59 5149.26 00:15:36.775 =================================================================================================================== 00:15:36.775 Total : 111699.76 436.33 0.00 0.00 4572.90 764.59 5679.79 00:15:37.035 0 00:15:37.035 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.035 11:39:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 120678 00:15:37.035 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@949 -- # '[' -z 120678 ']' 00:15:37.035 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # kill -0 120678 00:15:37.035 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # uname 00:15:37.035 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:37.035 11:39:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 120678 00:15:37.035 11:39:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:37.035 11:39:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:37.035 11:39:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # echo 'killing process with pid 120678' 00:15:37.035 killing process with pid 120678 00:15:37.035 11:39:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # kill 120678 00:15:37.035 Received shutdown signal, test time was about 2.211812 seconds 00:15:37.035 00:15:37.035 Latency(us) 00:15:37.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.035 =================================================================================================================== 00:15:37.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.035 11:39:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@973 -- # wait 120678 00:15:38.997 ************************************ 00:15:38.997 END TEST bdev_qd_sampling 00:15:38.997 ************************************ 00:15:38.997 11:39:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:15:38.997 
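Queue-depth sampling itself is two RPCs, both visible in the trace above: one enables periodic polling on the bdev, the other reads back the counters that bdev_get_iostat reports (queue_depth_polling_period, queue_depth, io_time, weighted_io_time). Illustrative sketch, not part of the captured log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_set_qd_sampling_period Malloc_QD 10   # sampling period of 10, as configured by the test above
    $RPC bdev_get_iostat -b Malloc_QD               # inspect queue_depth, io_time and weighted_io_time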
00:15:38.997 real 0m5.310s 00:15:38.997 user 0m9.686s 00:15:38.997 sys 0m0.350s 00:15:38.997 11:39:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:38.997 11:39:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:38.997 11:39:10 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:15:38.997 11:39:10 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:38.997 11:39:10 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:38.997 11:39:10 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:38.997 ************************************ 00:15:38.997 START TEST bdev_error 00:15:38.997 ************************************ 00:15:38.997 11:39:10 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # error_test_suite '' 00:15:38.997 11:39:10 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:15:38.997 11:39:10 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:15:38.997 11:39:10 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:15:38.997 11:39:10 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=120784 00:15:38.997 11:39:10 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:15:38.997 11:39:10 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 120784' 00:15:38.997 Process error testing pid: 120784 00:15:38.997 11:39:10 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 120784 00:15:38.997 11:39:10 blockdev_general.bdev_error -- common/autotest_common.sh@830 -- # '[' -z 120784 ']' 00:15:38.997 11:39:10 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.997 11:39:10 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:38.997 11:39:10 blockdev_general.bdev_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.997 11:39:10 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:38.997 11:39:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:38.997 [2024-06-10 11:39:10.955291] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:15:38.997 [2024-06-10 11:39:10.955718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120784 ] 00:15:39.256 [2024-06-10 11:39:11.139555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.515 [2024-06-10 11:39:11.413953] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.083 11:39:11 blockdev_general.bdev_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:40.083 11:39:11 blockdev_general.bdev_error -- common/autotest_common.sh@863 -- # return 0 00:15:40.083 11:39:11 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:40.083 11:39:11 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.083 11:39:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:40.083 Dev_1 00:15:40.083 11:39:11 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.083 11:39:12 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_1 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:40.083 [ 00:15:40.083 { 00:15:40.083 "name": "Dev_1", 00:15:40.083 "aliases": [ 00:15:40.083 "32c3154a-02a5-46c0-82a4-f02af4575c37" 00:15:40.083 ], 00:15:40.083 "product_name": "Malloc disk", 00:15:40.083 "block_size": 512, 00:15:40.083 "num_blocks": 262144, 00:15:40.083 "uuid": "32c3154a-02a5-46c0-82a4-f02af4575c37", 00:15:40.083 "assigned_rate_limits": { 00:15:40.083 "rw_ios_per_sec": 0, 00:15:40.083 "rw_mbytes_per_sec": 0, 00:15:40.083 "r_mbytes_per_sec": 0, 00:15:40.083 "w_mbytes_per_sec": 0 00:15:40.083 }, 00:15:40.083 "claimed": false, 00:15:40.083 "zoned": false, 00:15:40.083 "supported_io_types": { 00:15:40.083 "read": true, 00:15:40.083 "write": true, 00:15:40.083 "unmap": true, 00:15:40.083 "write_zeroes": true, 00:15:40.083 "flush": true, 00:15:40.083 "reset": true, 00:15:40.083 "compare": false, 00:15:40.083 "compare_and_write": false, 00:15:40.083 "abort": true, 00:15:40.083 "nvme_admin": false, 00:15:40.083 "nvme_io": false 00:15:40.083 }, 00:15:40.083 "memory_domains": [ 00:15:40.083 { 00:15:40.083 "dma_device_id": 
"system", 00:15:40.083 "dma_device_type": 1 00:15:40.083 }, 00:15:40.083 { 00:15:40.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.083 "dma_device_type": 2 00:15:40.083 } 00:15:40.083 ], 00:15:40.083 "driver_specific": {} 00:15:40.083 } 00:15:40.083 ] 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:15:40.083 11:39:12 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:40.083 true 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.083 11:39:12 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.083 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:40.346 Dev_2 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.346 11:39:12 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_2 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:40.346 [ 00:15:40.346 { 00:15:40.346 "name": "Dev_2", 00:15:40.346 "aliases": [ 00:15:40.346 "6038c48e-c4a2-430e-8b33-8b589b40b03b" 00:15:40.346 ], 00:15:40.346 "product_name": "Malloc disk", 00:15:40.346 "block_size": 512, 00:15:40.346 "num_blocks": 262144, 00:15:40.346 "uuid": "6038c48e-c4a2-430e-8b33-8b589b40b03b", 00:15:40.346 "assigned_rate_limits": { 00:15:40.346 "rw_ios_per_sec": 0, 00:15:40.346 "rw_mbytes_per_sec": 0, 00:15:40.346 "r_mbytes_per_sec": 0, 00:15:40.346 "w_mbytes_per_sec": 0 00:15:40.346 }, 00:15:40.346 "claimed": false, 00:15:40.346 "zoned": false, 00:15:40.346 "supported_io_types": { 00:15:40.346 "read": true, 00:15:40.346 "write": true, 00:15:40.346 "unmap": true, 00:15:40.346 "write_zeroes": true, 00:15:40.346 "flush": true, 00:15:40.346 "reset": true, 00:15:40.346 "compare": false, 00:15:40.346 "compare_and_write": false, 
00:15:40.346 "abort": true, 00:15:40.346 "nvme_admin": false, 00:15:40.346 "nvme_io": false 00:15:40.346 }, 00:15:40.346 "memory_domains": [ 00:15:40.346 { 00:15:40.346 "dma_device_id": "system", 00:15:40.346 "dma_device_type": 1 00:15:40.346 }, 00:15:40.346 { 00:15:40.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.346 "dma_device_type": 2 00:15:40.346 } 00:15:40.346 ], 00:15:40.346 "driver_specific": {} 00:15:40.346 } 00:15:40.346 ] 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:15:40.346 11:39:12 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:40.346 11:39:12 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.346 11:39:12 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:15:40.346 11:39:12 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:40.346 Running I/O for 5 seconds... 00:15:41.280 11:39:13 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 120784 00:15:41.280 11:39:13 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 120784' 00:15:41.280 Process is existed as continue on error is set. Pid: 120784 00:15:41.280 11:39:13 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:15:41.280 11:39:13 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.280 11:39:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:41.280 11:39:13 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.280 11:39:13 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:15:41.280 11:39:13 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.280 11:39:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:41.280 Timeout while waiting for response: 00:15:41.280 00:15:41.280 00:15:41.844 11:39:13 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.844 11:39:13 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:15:46.089 00:15:46.089 Latency(us) 00:15:46.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.089 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:46.089 EE_Dev_1 : 0.93 45754.75 178.73 5.40 0.00 346.95 117.03 975.24 00:15:46.089 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:46.089 Dev_2 : 5.00 86753.47 338.88 0.00 0.00 181.65 55.34 451387.00 00:15:46.089 =================================================================================================================== 00:15:46.089 Total : 132508.22 517.61 5.40 0.00 196.36 55.34 451387.00 00:15:47.022 11:39:18 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 120784 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@949 -- # '[' -z 120784 ']' 00:15:47.022 11:39:18 
blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # kill -0 120784 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # uname 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 120784 00:15:47.022 killing process with pid 120784 00:15:47.022 Received shutdown signal, test time was about 5.000000 seconds 00:15:47.022 00:15:47.022 Latency(us) 00:15:47.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.022 =================================================================================================================== 00:15:47.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 120784' 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # kill 120784 00:15:47.022 11:39:18 blockdev_general.bdev_error -- common/autotest_common.sh@973 -- # wait 120784 00:15:48.924 11:39:20 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:15:48.924 11:39:20 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=120908 00:15:48.924 Process error testing pid: 120908 00:15:48.924 11:39:20 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 120908' 00:15:48.924 11:39:20 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 120908 00:15:48.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.924 11:39:20 blockdev_general.bdev_error -- common/autotest_common.sh@830 -- # '[' -z 120908 ']' 00:15:48.924 11:39:20 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.924 11:39:20 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:48.924 11:39:20 blockdev_general.bdev_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.924 11:39:20 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:48.924 11:39:20 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:48.924 [2024-06-10 11:39:20.926732] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:15:48.924 [2024-06-10 11:39:20.927145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120908 ] 00:15:49.182 [2024-06-10 11:39:21.110846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.440 [2024-06-10 11:39:21.433006] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@863 -- # return 0 00:15:50.007 11:39:21 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:50.007 Dev_1 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.007 11:39:21 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_1 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.007 11:39:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:50.007 [ 00:15:50.007 { 00:15:50.007 "name": "Dev_1", 00:15:50.007 "aliases": [ 00:15:50.007 "69bda054-3e47-4e68-a5ed-27adafba07f6" 00:15:50.007 ], 00:15:50.007 "product_name": "Malloc disk", 00:15:50.007 "block_size": 512, 00:15:50.007 "num_blocks": 262144, 00:15:50.007 "uuid": "69bda054-3e47-4e68-a5ed-27adafba07f6", 00:15:50.007 "assigned_rate_limits": { 00:15:50.007 "rw_ios_per_sec": 0, 00:15:50.007 "rw_mbytes_per_sec": 0, 00:15:50.007 "r_mbytes_per_sec": 0, 00:15:50.007 "w_mbytes_per_sec": 0 00:15:50.007 }, 00:15:50.007 "claimed": false, 00:15:50.007 "zoned": false, 00:15:50.007 "supported_io_types": { 00:15:50.007 "read": true, 00:15:50.007 "write": true, 00:15:50.007 "unmap": true, 00:15:50.007 "write_zeroes": true, 00:15:50.007 "flush": true, 00:15:50.007 "reset": true, 00:15:50.007 "compare": false, 00:15:50.007 "compare_and_write": false, 00:15:50.007 "abort": true, 00:15:50.007 "nvme_admin": false, 00:15:50.007 "nvme_io": false 00:15:50.007 }, 00:15:50.007 "memory_domains": [ 00:15:50.007 { 00:15:50.007 "dma_device_id": 
"system", 00:15:50.007 "dma_device_type": 1 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.007 "dma_device_type": 2 00:15:50.007 } 00:15:50.007 ], 00:15:50.007 "driver_specific": {} 00:15:50.007 } 00:15:50.007 ] 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:15:50.007 11:39:22 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:50.007 true 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.007 11:39:22 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.007 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 Dev_2 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.269 11:39:22 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_2 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:50.269 [ 00:15:50.269 { 00:15:50.269 "name": "Dev_2", 00:15:50.269 "aliases": [ 00:15:50.269 "a01afa64-efd9-4920-ad88-61ce06b286fc" 00:15:50.269 ], 00:15:50.269 "product_name": "Malloc disk", 00:15:50.269 "block_size": 512, 00:15:50.269 "num_blocks": 262144, 00:15:50.269 "uuid": "a01afa64-efd9-4920-ad88-61ce06b286fc", 00:15:50.269 "assigned_rate_limits": { 00:15:50.269 "rw_ios_per_sec": 0, 00:15:50.269 "rw_mbytes_per_sec": 0, 00:15:50.269 "r_mbytes_per_sec": 0, 00:15:50.269 "w_mbytes_per_sec": 0 00:15:50.269 }, 00:15:50.269 "claimed": false, 00:15:50.269 "zoned": false, 00:15:50.269 "supported_io_types": { 00:15:50.269 "read": true, 00:15:50.269 "write": true, 00:15:50.269 "unmap": true, 00:15:50.269 "write_zeroes": true, 00:15:50.269 "flush": true, 00:15:50.269 "reset": true, 00:15:50.269 "compare": false, 00:15:50.269 "compare_and_write": false, 
00:15:50.269 "abort": true, 00:15:50.269 "nvme_admin": false, 00:15:50.269 "nvme_io": false 00:15:50.269 }, 00:15:50.269 "memory_domains": [ 00:15:50.269 { 00:15:50.269 "dma_device_id": "system", 00:15:50.269 "dma_device_type": 1 00:15:50.269 }, 00:15:50.269 { 00:15:50.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.269 "dma_device_type": 2 00:15:50.269 } 00:15:50.269 ], 00:15:50.269 "driver_specific": {} 00:15:50.269 } 00:15:50.269 ] 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:15:50.269 11:39:22 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.269 11:39:22 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 120908 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@649 -- # local es=0 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # valid_exec_arg wait 120908 00:15:50.269 11:39:22 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@637 -- # local arg=wait 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # type -t wait 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:50.269 11:39:22 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # wait 120908 00:15:50.527 Running I/O for 5 seconds... 
00:15:50.527 task offset: 184824 on job bdev=EE_Dev_1 fails 00:15:50.527 00:15:50.527 Latency(us) 00:15:50.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.527 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:50.527 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:15:50.527 EE_Dev_1 : 0.00 29100.53 113.67 6613.76 0.00 349.62 138.48 639.76 00:15:50.527 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:50.527 Dev_2 : 0.00 19875.78 77.64 0.00 0.00 562.16 124.83 1037.65 00:15:50.527 =================================================================================================================== 00:15:50.527 Total : 48976.31 191.31 6613.76 0.00 464.90 124.83 1037.65 00:15:50.527 [2024-06-10 11:39:22.344913] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:50.527 request: 00:15:50.527 { 00:15:50.527 "method": "perform_tests", 00:15:50.527 "req_id": 1 00:15:50.527 } 00:15:50.527 Got JSON-RPC error response 00:15:50.527 response: 00:15:50.527 { 00:15:50.527 "code": -32603, 00:15:50.527 "message": "bdevperf failed with error Operation not permitted" 00:15:50.527 } 00:15:53.057 11:39:24 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # es=255 00:15:53.057 11:39:24 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:53.057 11:39:24 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # es=127 00:15:53.057 11:39:24 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # case "$es" in 00:15:53.057 11:39:24 blockdev_general.bdev_error -- common/autotest_common.sh@669 -- # es=1 00:15:53.057 11:39:24 blockdev_general.bdev_error -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:53.057 00:15:53.057 real 0m14.040s 00:15:53.057 user 0m14.098s 00:15:53.057 sys 0m0.867s 00:15:53.057 11:39:24 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:53.057 11:39:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:53.057 ************************************ 00:15:53.057 END TEST bdev_error 00:15:53.057 ************************************ 00:15:53.057 11:39:24 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:15:53.057 11:39:24 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:53.057 11:39:24 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:53.057 11:39:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:53.057 ************************************ 00:15:53.057 START TEST bdev_stat 00:15:53.057 ************************************ 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # stat_test_suite '' 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=120985 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 120985' 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:15:53.057 Process Bdev IO statistics testing pid: 120985 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 
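For readers tracing the bdev_error run above: the suite stacks an error-injecting bdev (EE_Dev_1) on a plain malloc bdev, arms it to fail the next five I/Os of any type, and then expects the bdevperf perform_tests RPC to come back with the JSON-RPC error shown (-32603). A minimal sketch of that flow, assuming a bdevperf instance is already listening on the default RPC socket; the names, sizes and flags are the ones visible in the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_malloc_create -b Dev_1 128 512                  # base malloc bdev: 128 MiB, 512 B blocks
  $RPC bdev_error_create Dev_1                              # wraps Dev_1 as the error bdev EE_Dev_1
  $RPC bdev_malloc_create -b Dev_2 128 512                  # second device used by the job
  $RPC bdev_error_inject_error EE_Dev_1 all failure -n 5    # fail the next 5 I/Os of any type
  # perform_tests is now expected to fail; the log above shows code -32603 coming back
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests || echo 'failed as expected'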
00:15:53.057 11:39:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 120985 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- common/autotest_common.sh@830 -- # '[' -z 120985 ']' 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:53.057 11:39:24 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:53.057 [2024-06-10 11:39:25.043160] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:15:53.057 [2024-06-10 11:39:25.043698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120985 ] 00:15:53.315 [2024-06-10 11:39:25.219051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.595 [2024-06-10 11:39:25.535542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.595 [2024-06-10 11:39:25.535542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@863 -- # return 0 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:54.199 Malloc_STAT 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_STAT 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local i 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:15:54.199 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 
00:15:54.200 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:54.200 [ 00:15:54.200 { 00:15:54.200 "name": "Malloc_STAT", 00:15:54.200 "aliases": [ 00:15:54.200 "99fc928a-18b9-433c-818a-8b9816aea59a" 00:15:54.200 ], 00:15:54.200 "product_name": "Malloc disk", 00:15:54.200 "block_size": 512, 00:15:54.200 "num_blocks": 262144, 00:15:54.200 "uuid": "99fc928a-18b9-433c-818a-8b9816aea59a", 00:15:54.200 "assigned_rate_limits": { 00:15:54.200 "rw_ios_per_sec": 0, 00:15:54.200 "rw_mbytes_per_sec": 0, 00:15:54.200 "r_mbytes_per_sec": 0, 00:15:54.200 "w_mbytes_per_sec": 0 00:15:54.200 }, 00:15:54.200 "claimed": false, 00:15:54.200 "zoned": false, 00:15:54.200 "supported_io_types": { 00:15:54.200 "read": true, 00:15:54.200 "write": true, 00:15:54.200 "unmap": true, 00:15:54.200 "write_zeroes": true, 00:15:54.200 "flush": true, 00:15:54.200 "reset": true, 00:15:54.200 "compare": false, 00:15:54.200 "compare_and_write": false, 00:15:54.200 "abort": true, 00:15:54.200 "nvme_admin": false, 00:15:54.200 "nvme_io": false 00:15:54.200 }, 00:15:54.200 "memory_domains": [ 00:15:54.200 { 00:15:54.200 "dma_device_id": "system", 00:15:54.200 "dma_device_type": 1 00:15:54.200 }, 00:15:54.200 { 00:15:54.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.200 "dma_device_type": 2 00:15:54.200 } 00:15:54.200 ], 00:15:54.200 "driver_specific": {} 00:15:54.200 } 00:15:54.200 ] 00:15:54.200 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.200 11:39:26 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # return 0 00:15:54.200 11:39:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:15:54.200 11:39:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:54.458 Running I/O for 10 seconds... 
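The Malloc_STAT setup just traced follows the autotest waitforbdev pattern: create the bdev, let the examine-on-create callbacks drain, then poll bdev_get_bdevs with a timeout until the name is registered. A rough stand-alone equivalent, reusing the 2000 ms timeout and rpc.py path seen in this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_malloc_create -b Malloc_STAT 128 512    # 128 MiB malloc bdev with 512 B blocks
  $RPC bdev_wait_for_examine                        # block until bdev examine callbacks finish
  # -t waits up to the given number of milliseconds for the bdev to show up
  $RPC bdev_get_bdevs -b Malloc_STAT -t 2000 >/dev/null && echo 'Malloc_STAT registered'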
00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:15:56.361 "tick_rate": 2100000000, 00:15:56.361 "ticks": 1927930375926, 00:15:56.361 "bdevs": [ 00:15:56.361 { 00:15:56.361 "name": "Malloc_STAT", 00:15:56.361 "bytes_read": 842043904, 00:15:56.361 "num_read_ops": 205571, 00:15:56.361 "bytes_written": 0, 00:15:56.361 "num_write_ops": 0, 00:15:56.361 "bytes_unmapped": 0, 00:15:56.361 "num_unmap_ops": 0, 00:15:56.361 "bytes_copied": 0, 00:15:56.361 "num_copy_ops": 0, 00:15:56.361 "read_latency_ticks": 2041318595582, 00:15:56.361 "max_read_latency_ticks": 13340218, 00:15:56.361 "min_read_latency_ticks": 309968, 00:15:56.361 "write_latency_ticks": 0, 00:15:56.361 "max_write_latency_ticks": 0, 00:15:56.361 "min_write_latency_ticks": 0, 00:15:56.361 "unmap_latency_ticks": 0, 00:15:56.361 "max_unmap_latency_ticks": 0, 00:15:56.361 "min_unmap_latency_ticks": 0, 00:15:56.361 "copy_latency_ticks": 0, 00:15:56.361 "max_copy_latency_ticks": 0, 00:15:56.361 "min_copy_latency_ticks": 0, 00:15:56.361 "io_error": {} 00:15:56.361 } 00:15:56.361 ] 00:15:56.361 }' 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=205571 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:15:56.361 "tick_rate": 2100000000, 00:15:56.361 "ticks": 1928057880982, 00:15:56.361 "name": "Malloc_STAT", 00:15:56.361 "channels": [ 00:15:56.361 { 00:15:56.361 "thread_id": 2, 00:15:56.361 "bytes_read": 428867584, 00:15:56.361 "num_read_ops": 104704, 00:15:56.361 "bytes_written": 0, 00:15:56.361 "num_write_ops": 0, 00:15:56.361 "bytes_unmapped": 0, 00:15:56.361 "num_unmap_ops": 0, 
00:15:56.361 "bytes_copied": 0, 00:15:56.361 "num_copy_ops": 0, 00:15:56.361 "read_latency_ticks": 1052081487250, 00:15:56.361 "max_read_latency_ticks": 13340218, 00:15:56.361 "min_read_latency_ticks": 7735090, 00:15:56.361 "write_latency_ticks": 0, 00:15:56.361 "max_write_latency_ticks": 0, 00:15:56.361 "min_write_latency_ticks": 0, 00:15:56.361 "unmap_latency_ticks": 0, 00:15:56.361 "max_unmap_latency_ticks": 0, 00:15:56.361 "min_unmap_latency_ticks": 0, 00:15:56.361 "copy_latency_ticks": 0, 00:15:56.361 "max_copy_latency_ticks": 0, 00:15:56.361 "min_copy_latency_ticks": 0 00:15:56.361 }, 00:15:56.361 { 00:15:56.361 "thread_id": 3, 00:15:56.361 "bytes_read": 439353344, 00:15:56.361 "num_read_ops": 107264, 00:15:56.361 "bytes_written": 0, 00:15:56.361 "num_write_ops": 0, 00:15:56.361 "bytes_unmapped": 0, 00:15:56.361 "num_unmap_ops": 0, 00:15:56.361 "bytes_copied": 0, 00:15:56.361 "num_copy_ops": 0, 00:15:56.361 "read_latency_ticks": 1052653504590, 00:15:56.361 "max_read_latency_ticks": 12277184, 00:15:56.361 "min_read_latency_ticks": 7498782, 00:15:56.361 "write_latency_ticks": 0, 00:15:56.361 "max_write_latency_ticks": 0, 00:15:56.361 "min_write_latency_ticks": 0, 00:15:56.361 "unmap_latency_ticks": 0, 00:15:56.361 "max_unmap_latency_ticks": 0, 00:15:56.361 "min_unmap_latency_ticks": 0, 00:15:56.361 "copy_latency_ticks": 0, 00:15:56.361 "max_copy_latency_ticks": 0, 00:15:56.361 "min_copy_latency_ticks": 0 00:15:56.361 } 00:15:56.361 ] 00:15:56.361 }' 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=104704 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=104704 00:15:56.361 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=107264 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=211968 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:15:56.629 "tick_rate": 2100000000, 00:15:56.629 "ticks": 1928327026640, 00:15:56.629 "bdevs": [ 00:15:56.629 { 00:15:56.629 "name": "Malloc_STAT", 00:15:56.629 "bytes_read": 924881408, 00:15:56.629 "num_read_ops": 225795, 00:15:56.629 "bytes_written": 0, 00:15:56.629 "num_write_ops": 0, 00:15:56.629 "bytes_unmapped": 0, 00:15:56.629 "num_unmap_ops": 0, 00:15:56.629 "bytes_copied": 0, 00:15:56.629 "num_copy_ops": 0, 00:15:56.629 "read_latency_ticks": 2242947579018, 00:15:56.629 "max_read_latency_ticks": 13340218, 00:15:56.629 "min_read_latency_ticks": 309968, 00:15:56.629 "write_latency_ticks": 0, 00:15:56.629 "max_write_latency_ticks": 0, 00:15:56.629 "min_write_latency_ticks": 0, 00:15:56.629 "unmap_latency_ticks": 0, 00:15:56.629 "max_unmap_latency_ticks": 0, 00:15:56.629 "min_unmap_latency_ticks": 0, 00:15:56.629 "copy_latency_ticks": 0, 00:15:56.629 "max_copy_latency_ticks": 0, 00:15:56.629 
"min_copy_latency_ticks": 0, 00:15:56.629 "io_error": {} 00:15:56.629 } 00:15:56.629 ] 00:15:56.629 }' 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=225795 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 211968 -lt 205571 ']' 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 211968 -gt 225795 ']' 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.629 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:56.629 00:15:56.629 Latency(us) 00:15:56.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.629 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:56.629 Malloc_STAT : 2.17 53433.13 208.72 0.00 0.00 4779.31 1178.09 6366.35 00:15:56.629 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:56.629 Malloc_STAT : 2.17 54564.35 213.14 0.00 0.00 4680.91 795.79 6023.07 00:15:56.629 =================================================================================================================== 00:15:56.629 Total : 107997.47 421.87 0.00 0.00 4729.57 795.79 6366.35 00:15:56.887 0 00:15:56.887 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.887 11:39:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 120985 00:15:56.887 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@949 -- # '[' -z 120985 ']' 00:15:56.887 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # kill -0 120985 00:15:56.887 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # uname 00:15:56.887 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:56.887 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 120985 00:15:56.887 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:56.888 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:56.888 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 120985' 00:15:56.888 killing process with pid 120985 00:15:56.888 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # kill 120985 00:15:56.888 Received shutdown signal, test time was about 2.381936 seconds 00:15:56.888 00:15:56.888 Latency(us) 00:15:56.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.888 =================================================================================================================== 00:15:56.888 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.888 11:39:28 blockdev_general.bdev_stat -- common/autotest_common.sh@973 -- # wait 120985 00:15:58.790 ************************************ 00:15:58.790 END TEST bdev_stat 00:15:58.790 ************************************ 00:15:58.790 11:39:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:15:58.790 00:15:58.790 real 0m5.642s 00:15:58.790 user 0m10.559s 00:15:58.790 sys 0m0.420s 
00:15:58.790 11:39:30 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:58.790 11:39:30 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:15:58.790 11:39:30 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:15:58.790 ************************************ 00:15:58.790 END TEST blockdev_general 00:15:58.790 ************************************ 00:15:58.790 00:15:58.790 real 2m41.467s 00:15:58.790 user 6m13.043s 00:15:58.790 sys 0m24.270s 00:15:58.790 11:39:30 blockdev_general -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:58.790 11:39:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:58.790 11:39:30 -- spdk/autotest.sh@194 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:15:58.790 11:39:30 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:58.790 11:39:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:58.790 11:39:30 -- common/autotest_common.sh@10 -- # set +x 00:15:58.790 ************************************ 00:15:58.790 START TEST bdev_raid 00:15:58.790 ************************************ 00:15:58.790 11:39:30 bdev_raid -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:15:58.790 * Looking for test storage... 
00:15:58.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:58.790 11:39:30 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:58.790 11:39:30 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:15:58.790 11:39:30 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:15:58.790 11:39:30 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:15:59.049 11:39:30 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:15:59.049 11:39:30 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:15:59.049 11:39:30 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:15:59.049 11:39:30 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:15:59.049 11:39:30 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:15:59.049 11:39:30 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:15:59.049 11:39:30 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:15:59.049 11:39:30 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:15:59.049 11:39:30 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:59.049 11:39:30 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:59.049 11:39:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.049 ************************************ 00:15:59.049 START TEST raid_function_test_raid0 00:15:59.049 ************************************ 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # raid_function_test raid0 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=121151 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 121151' 00:15:59.049 Process raid pid: 121151 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 121151 /var/tmp/spdk-raid.sock 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@830 -- # '[' -z 121151 ']' 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:59.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:59.049 11:39:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:15:59.049 [2024-06-10 11:39:30.959283] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:15:59.049 [2024-06-10 11:39:30.959739] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.307 [2024-06-10 11:39:31.155371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.643 [2024-06-10 11:39:31.436151] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.643 [2024-06-10 11:39:31.656903] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.902 11:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:59.902 11:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@863 -- # return 0 00:15:59.902 11:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:15:59.902 11:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:15:59.902 11:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:59.902 11:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:15:59.902 11:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:00.470 [2024-06-10 11:39:32.238027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:00.470 [2024-06-10 11:39:32.240171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:00.470 [2024-06-10 11:39:32.240416] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:00.470 [2024-06-10 11:39:32.240528] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:00.470 [2024-06-10 11:39:32.240735] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:00.470 [2024-06-10 11:39:32.241118] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:00.470 [2024-06-10 11:39:32.241239] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:16:00.470 [2024-06-10 11:39:32.241505] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.470 Base_1 00:16:00.470 Base_2 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.470 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:00.730 [2024-06-10 11:39:32.726177] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:00.730 /dev/nbd0 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local i 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # break 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.730 1+0 records in 00:16:00.730 1+0 records out 00:16:00.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387696 s, 10.6 MB/s 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # size=4096 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # return 0 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count 
/var/tmp/spdk-raid.sock 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:00.730 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:00.989 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:00.989 { 00:16:00.989 "nbd_device": "/dev/nbd0", 00:16:00.989 "bdev_name": "raid" 00:16:00.989 } 00:16:00.989 ]' 00:16:00.989 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:00.989 11:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:00.989 { 00:16:00.989 "nbd_device": "/dev/nbd0", 00:16:00.989 "bdev_name": "raid" 00:16:00.989 } 00:16:00.989 ]' 00:16:00.989 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom 
of=/raidtest/raidrandtest bs=512 count=4096 00:16:01.247 4096+0 records in 00:16:01.247 4096+0 records out 00:16:01.247 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341752 s, 61.4 MB/s 00:16:01.247 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:01.505 4096+0 records in 00:16:01.505 4096+0 records out 00:16:01.505 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.263335 s, 8.0 MB/s 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:01.505 128+0 records in 00:16:01.505 128+0 records out 00:16:01.505 65536 bytes (66 kB, 64 KiB) copied, 0.000816117 s, 80.3 MB/s 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:01.505 2035+0 records in 00:16:01.505 2035+0 records out 00:16:01.505 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00749068 s, 139 MB/s 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:01.505 456+0 records in 00:16:01.505 456+0 records out 00:16:01.505 233472 bytes (233 
kB, 228 KiB) copied, 0.00177121 s, 132 MB/s 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.505 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:01.763 [2024-06-10 11:39:33.755853] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:01.763 11:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep 
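The raid0 pass that just finished exercises the raid bdev through /dev/nbd0 (started with nbd_start_disk above) using a write/read-back plus unmap check: fill a reference file with random data, copy it onto the device, compare, then for each test region zero that range in the reference file and blkdiscard the same byte range on the device before comparing again. A trimmed-down sketch of one region, with offsets and lengths expressed in 512-byte blocks as in the log:

  nbd=/dev/nbd0; ref=/raidtest/raidrandtest; bs=512
  dd if=/dev/urandom of=$ref bs=$bs count=4096                       # 2 MiB reference pattern
  dd if=$ref of=$nbd bs=$bs count=4096 oflag=direct                  # push it through the raid bdev
  blockdev --flushbufs $nbd
  cmp -b -n $((4096*bs)) $ref $nbd                                   # device must match the reference
  off=1028; num=2035                                                 # one of the regions from the log
  dd if=/dev/zero of=$ref bs=$bs seek=$off count=$num conv=notrunc   # reference now expects zeroes here
  blkdiscard -o $((off*bs)) -l $((num*bs)) $nbd                      # unmap the same range on the device
  blockdev --flushbufs $nbd
  cmp -b -n $((4096*bs)) $ref $nbd && echo 'unmapped range reads back as zeroes'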
-c /dev/nbd 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 121151 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@949 -- # '[' -z 121151 ']' 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # kill -0 121151 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # uname 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 121151 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 121151' 00:16:02.330 killing process with pid 121151 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # kill 121151 00:16:02.330 [2024-06-10 11:39:34.202330] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.330 [2024-06-10 11:39:34.202441] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.330 [2024-06-10 11:39:34.202506] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.330 [2024-06-10 11:39:34.202522] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:16:02.330 11:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # wait 121151 00:16:02.588 [2024-06-10 11:39:34.412861] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.993 ************************************ 00:16:03.993 END TEST raid_function_test_raid0 00:16:03.993 ************************************ 00:16:03.993 11:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:16:03.993 00:16:03.993 real 0m4.948s 00:16:03.993 user 0m6.154s 00:16:03.993 sys 0m1.120s 00:16:03.993 11:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:03.993 11:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:16:03.993 11:39:35 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:16:03.993 11:39:35 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:03.993 11:39:35 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:03.993 11:39:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.993 ************************************ 00:16:03.993 START TEST raid_function_test_concat 00:16:03.993 ************************************ 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@1124 -- # raid_function_test concat 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=121316 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 121316' 00:16:03.993 Process raid pid: 121316 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 121316 /var/tmp/spdk-raid.sock 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@830 -- # '[' -z 121316 ']' 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:03.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:03.993 11:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:03.993 [2024-06-10 11:39:35.953464] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:16:03.993 [2024-06-10 11:39:35.953624] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.252 [2024-06-10 11:39:36.125544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.511 [2024-06-10 11:39:36.424980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.770 [2024-06-10 11:39:36.663157] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.029 11:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:05.029 11:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@863 -- # return 0 00:16:05.029 11:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:16:05.029 11:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:16:05.029 11:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:05.029 11:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:16:05.029 11:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:05.287 [2024-06-10 11:39:37.310630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:05.287 [2024-06-10 11:39:37.312680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:05.287 [2024-06-10 11:39:37.312750] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:05.287 [2024-06-10 11:39:37.312761] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:05.287 [2024-06-10 11:39:37.312908] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:05.287 [2024-06-10 11:39:37.313223] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:05.287 [2024-06-10 11:39:37.313234] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:16:05.287 [2024-06-10 11:39:37.313410] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.287 Base_1 00:16:05.287 Base_2 00:16:05.287 11:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:05.287 11:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:05.287 11:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:05.546 11:39:37 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.546 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:05.805 [2024-06-10 11:39:37.734734] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:05.805 /dev/nbd0 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local i 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # break 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.805 1+0 records in 00:16:05.805 1+0 records out 00:16:05.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412465 s, 9.9 MB/s 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # size=4096 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # return 0 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:05.805 11:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:06.064 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:06.064 { 00:16:06.064 "nbd_device": "/dev/nbd0", 00:16:06.064 "bdev_name": "raid" 00:16:06.064 } 00:16:06.064 ]' 00:16:06.064 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:06.064 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:06.064 { 00:16:06.064 "nbd_device": "/dev/nbd0", 00:16:06.064 "bdev_name": "raid" 00:16:06.064 } 00:16:06.064 ]' 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:16:06.323 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:16:06.323 4096+0 records in 00:16:06.323 4096+0 records out 00:16:06.323 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0311134 s, 67.4 MB/s 00:16:06.323 11:39:38 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:06.581 4096+0 records in 00:16:06.581 4096+0 records out 00:16:06.581 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.260623 s, 8.0 MB/s 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:06.581 128+0 records in 00:16:06.581 128+0 records out 00:16:06.581 65536 bytes (66 kB, 64 KiB) copied, 0.000447535 s, 146 MB/s 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:06.581 2035+0 records in 00:16:06.581 2035+0 records out 00:16:06.581 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00725926 s, 144 MB/s 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:06.581 456+0 records in 00:16:06.581 456+0 records out 00:16:06.581 233472 bytes (233 kB, 228 KiB) copied, 0.00200987 s, 116 MB/s 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
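The discard verification traced above repeats one pattern per (offset, length) pair. A minimal standalone sketch of that pattern, reconstructed from the traced commands (paths, sizes and the three offset/length pairs are the ones in the log; it assumes discarded regions of the nbd-exposed raid bdev read back as zeroes, which is exactly what the final cmp relies on):

    nbd=/dev/nbd0
    ref=/raidtest/raidrandtest
    blksize=512
    unmap_blk_offs=(0 1028 321)
    unmap_blk_nums=(128 2035 456)
    # seed 2 MiB of random data and mirror it onto the raid bdev exposed via nbd
    dd if=/dev/urandom of=$ref bs=$blksize count=4096
    dd if=$ref of=$nbd bs=$blksize count=4096 oflag=direct
    blockdev --flushbufs $nbd
    cmp -b -n 2097152 $ref $nbd
    for i in 0 1 2; do
        off=$(( unmap_blk_offs[i] * blksize ))
        len=$(( unmap_blk_nums[i] * blksize ))
        # zero the same range in the reference file, discard it on the device,
        # then require both copies to still compare equal byte for byte
        dd if=/dev/zero of=$ref bs=$blksize seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
        blkdiscard -o $off -l $len $nbd
        blockdev --flushbufs $nbd
        cmp -b -n 2097152 $ref $nbd
    done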
00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.581 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:06.839 [2024-06-10 11:39:38.862732] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:06.839 11:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:16:07.404 11:39:39 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:07.404 11:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 121316 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@949 -- # '[' -z 121316 ']' 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # kill -0 121316 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # uname 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 121316 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 121316' 00:16:07.405 killing process with pid 121316 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # kill 121316 00:16:07.405 [2024-06-10 11:39:39.243592] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.405 [2024-06-10 11:39:39.243687] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.405 [2024-06-10 11:39:39.243738] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.405 11:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # wait 121316 00:16:07.405 [2024-06-10 11:39:39.243750] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:16:07.405 [2024-06-10 11:39:39.442770] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.780 ************************************ 00:16:08.780 END TEST raid_function_test_concat 00:16:08.780 ************************************ 00:16:08.780 11:39:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:16:08.780 00:16:08.780 real 0m4.910s 00:16:08.780 user 0m6.086s 00:16:08.780 sys 0m1.203s 00:16:08.780 11:39:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:08.780 11:39:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:16:09.039 11:39:40 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:16:09.039 11:39:40 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:16:09.039 11:39:40 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:09.039 11:39:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.039 ************************************ 00:16:09.039 START TEST raid0_resize_test 00:16:09.039 ************************************ 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # raid0_resize_test 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local 
blksize=512 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=121484 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 121484' 00:16:09.039 Process raid pid: 121484 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 121484 /var/tmp/spdk-raid.sock 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@830 -- # '[' -z 121484 ']' 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:09.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:09.039 11:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.039 [2024-06-10 11:39:40.945156] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
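Each TEST block here drives a dedicated bdev_svc app over its own RPC socket before any bdev RPCs are issued. A rough sketch of that launch-and-wait step (the real waitforlisten helper bounds its retries and may probe the socket differently; rpc_get_methods below is only an illustrative liveness check):

    # start a bare bdev_svc app listening on the raid test socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll the socket until the app answers, then the test can issue bdev RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done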
00:16:09.039 [2024-06-10 11:39:40.945403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.299 [2024-06-10 11:39:41.132972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.559 [2024-06-10 11:39:41.393872] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.817 [2024-06-10 11:39:41.621145] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.084 11:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:10.084 11:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@863 -- # return 0 00:16:10.084 11:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:16:10.359 Base_1 00:16:10.359 11:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:16:10.359 Base_2 00:16:10.359 11:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:16:10.926 [2024-06-10 11:39:42.711224] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:10.926 [2024-06-10 11:39:42.713433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:10.926 [2024-06-10 11:39:42.713509] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:10.926 [2024-06-10 11:39:42.713520] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:10.926 [2024-06-10 11:39:42.713658] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:10.926 [2024-06-10 11:39:42.713988] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:10.926 [2024-06-10 11:39:42.714015] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:16:10.926 [2024-06-10 11:39:42.714190] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.926 11:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:16:11.185 [2024-06-10 11:39:43.019273] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:11.185 [2024-06-10 11:39:43.019316] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:16:11.185 true 00:16:11.185 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:16:11.185 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:11.444 [2024-06-10 11:39:43.303437] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.444 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:16:11.444 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:16:11.444 11:39:43 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:16:11.444 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:16:11.702 [2024-06-10 11:39:43.583334] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:11.702 [2024-06-10 11:39:43.583379] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:16:11.702 [2024-06-10 11:39:43.583436] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:16:11.702 true 00:16:11.702 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:11.702 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:16:11.961 [2024-06-10 11:39:43.883491] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 121484 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@949 -- # '[' -z 121484 ']' 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # kill -0 121484 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # uname 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 121484 00:16:11.961 killing process with pid 121484 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 121484' 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # kill 121484 00:16:11.961 [2024-06-10 11:39:43.933007] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.961 [2024-06-10 11:39:43.933105] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.961 11:39:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # wait 121484 00:16:11.961 [2024-06-10 11:39:43.933169] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.961 [2024-06-10 11:39:43.933178] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:16:11.961 [2024-06-10 11:39:43.933784] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.431 11:39:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:16:13.431 00:16:13.431 real 0m4.489s 00:16:13.431 user 0m6.331s 00:16:13.431 sys 0m0.654s 00:16:13.431 11:39:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:13.431 11:39:45 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.431 ************************************ 00:16:13.431 END TEST raid0_resize_test 00:16:13.431 ************************************ 00:16:13.431 11:39:45 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:13.431 11:39:45 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:13.431 11:39:45 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:16:13.431 11:39:45 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:13.431 11:39:45 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:13.431 11:39:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.431 ************************************ 00:16:13.431 START TEST raid_state_function_test 00:16:13.431 ************************************ 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 2 false 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 
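For reference, the raid0_resize_test that ends above boils down to this RPC sequence (taken from the traced rpc.py calls; the block counts in the comments are the values checked in the log):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # two 32 MiB null bdevs with 512-byte blocks, striped into one raid0 volume
    $rpc bdev_null_create Base_1 32 512
    $rpc bdev_null_create Base_2 32 512
    $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
    # growing only one base bdev must leave the raid0 size unchanged ...
    $rpc bdev_null_resize Base_1 64
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072
    # ... growing the second one lets the volume double
    $rpc bdev_null_resize Base_2 64
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # now 262144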
00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=121582 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121582' 00:16:13.431 Process raid pid: 121582 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 121582 /var/tmp/spdk-raid.sock 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 121582 ']' 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:13.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:13.431 11:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.691 [2024-06-10 11:39:45.504240] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:16:13.691 [2024-06-10 11:39:45.504438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.691 [2024-06-10 11:39:45.689038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.950 [2024-06-10 11:39:45.964769] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.209 [2024-06-10 11:39:46.199765] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.468 11:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:14.468 11:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:16:14.468 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:14.730 [2024-06-10 11:39:46.653229] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.730 [2024-06-10 11:39:46.653324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.730 [2024-06-10 11:39:46.653337] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.730 [2024-06-10 11:39:46.653366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.730 11:39:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.730 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.988 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.988 "name": "Existed_Raid", 00:16:14.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.988 "strip_size_kb": 64, 00:16:14.988 "state": "configuring", 00:16:14.988 "raid_level": "raid0", 00:16:14.988 "superblock": false, 00:16:14.988 "num_base_bdevs": 2, 00:16:14.988 "num_base_bdevs_discovered": 0, 00:16:14.988 "num_base_bdevs_operational": 2, 00:16:14.988 "base_bdevs_list": [ 00:16:14.988 { 00:16:14.988 "name": "BaseBdev1", 00:16:14.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.988 "is_configured": false, 00:16:14.988 "data_offset": 0, 00:16:14.988 "data_size": 0 00:16:14.988 }, 00:16:14.988 { 00:16:14.988 "name": "BaseBdev2", 00:16:14.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.988 "is_configured": false, 00:16:14.988 "data_offset": 0, 00:16:14.988 "data_size": 0 00:16:14.988 } 00:16:14.988 ] 00:16:14.988 }' 00:16:14.988 11:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.988 11:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.604 11:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:15.863 [2024-06-10 11:39:47.917413] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.863 [2024-06-10 11:39:47.917457] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:16.121 11:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:16.379 [2024-06-10 11:39:48.197452] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.379 [2024-06-10 11:39:48.197521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.379 [2024-06-10 11:39:48.197530] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.379 [2024-06-10 11:39:48.197554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.379 11:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.637 [2024-06-10 11:39:48.514253] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.637 BaseBdev1 00:16:16.637 11:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:16.637 11:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:16:16.637 11:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:16.637 11:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:16.637 11:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:16.637 11:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:16.637 11:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.896 11:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:17.154 [ 00:16:17.154 { 00:16:17.154 "name": "BaseBdev1", 00:16:17.154 "aliases": [ 00:16:17.154 "21ff4bbd-e6c9-4345-b775-2c1cd4240ab5" 00:16:17.154 ], 00:16:17.154 "product_name": "Malloc disk", 00:16:17.154 "block_size": 512, 00:16:17.154 "num_blocks": 65536, 00:16:17.154 "uuid": "21ff4bbd-e6c9-4345-b775-2c1cd4240ab5", 00:16:17.154 "assigned_rate_limits": { 00:16:17.154 "rw_ios_per_sec": 0, 00:16:17.154 "rw_mbytes_per_sec": 0, 00:16:17.154 "r_mbytes_per_sec": 0, 00:16:17.154 "w_mbytes_per_sec": 0 00:16:17.154 }, 00:16:17.154 "claimed": true, 00:16:17.154 "claim_type": "exclusive_write", 00:16:17.154 "zoned": false, 00:16:17.154 "supported_io_types": { 00:16:17.154 "read": true, 00:16:17.154 "write": true, 00:16:17.154 "unmap": true, 00:16:17.154 "write_zeroes": true, 00:16:17.154 "flush": true, 00:16:17.154 "reset": true, 00:16:17.154 "compare": false, 00:16:17.154 "compare_and_write": false, 00:16:17.154 "abort": true, 00:16:17.154 "nvme_admin": false, 00:16:17.154 "nvme_io": false 00:16:17.154 }, 00:16:17.154 "memory_domains": [ 00:16:17.154 { 00:16:17.154 "dma_device_id": "system", 00:16:17.154 "dma_device_type": 1 00:16:17.154 }, 00:16:17.154 { 00:16:17.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.154 "dma_device_type": 2 00:16:17.154 } 00:16:17.154 ], 00:16:17.154 "driver_specific": {} 00:16:17.154 } 00:16:17.154 ] 00:16:17.154 11:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:17.155 11:39:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.155 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.413 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.413 "name": "Existed_Raid", 00:16:17.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.413 "strip_size_kb": 64, 00:16:17.414 "state": "configuring", 00:16:17.414 "raid_level": "raid0", 00:16:17.414 "superblock": false, 00:16:17.414 "num_base_bdevs": 2, 00:16:17.414 "num_base_bdevs_discovered": 1, 00:16:17.414 "num_base_bdevs_operational": 2, 00:16:17.414 "base_bdevs_list": [ 00:16:17.414 { 00:16:17.414 "name": "BaseBdev1", 00:16:17.414 "uuid": "21ff4bbd-e6c9-4345-b775-2c1cd4240ab5", 00:16:17.414 "is_configured": true, 00:16:17.414 "data_offset": 0, 00:16:17.414 "data_size": 65536 00:16:17.414 }, 00:16:17.414 { 00:16:17.414 "name": "BaseBdev2", 00:16:17.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.414 "is_configured": false, 00:16:17.414 "data_offset": 0, 00:16:17.414 "data_size": 0 00:16:17.414 } 00:16:17.414 ] 00:16:17.414 }' 00:16:17.414 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.414 11:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.980 11:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:17.980 [2024-06-10 11:39:50.002579] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.980 [2024-06-10 11:39:50.002638] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:17.980 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:18.238 [2024-06-10 11:39:50.286692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.238 [2024-06-10 11:39:50.288969] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.238 [2024-06-10 11:39:50.289037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.497 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.757 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.757 "name": "Existed_Raid", 00:16:18.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.757 "strip_size_kb": 64, 00:16:18.757 "state": "configuring", 00:16:18.757 "raid_level": "raid0", 00:16:18.757 "superblock": false, 00:16:18.757 "num_base_bdevs": 2, 00:16:18.757 "num_base_bdevs_discovered": 1, 00:16:18.757 "num_base_bdevs_operational": 2, 00:16:18.757 "base_bdevs_list": [ 00:16:18.757 { 00:16:18.757 "name": "BaseBdev1", 00:16:18.757 "uuid": "21ff4bbd-e6c9-4345-b775-2c1cd4240ab5", 00:16:18.757 "is_configured": true, 00:16:18.757 "data_offset": 0, 00:16:18.757 "data_size": 65536 00:16:18.757 }, 00:16:18.757 { 00:16:18.757 "name": "BaseBdev2", 00:16:18.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.757 "is_configured": false, 00:16:18.757 "data_offset": 0, 00:16:18.757 "data_size": 0 00:16:18.757 } 00:16:18.757 ] 00:16:18.757 }' 00:16:18.757 11:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.757 11:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.324 11:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:19.582 [2024-06-10 11:39:51.490230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.582 [2024-06-10 11:39:51.490291] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:19.582 [2024-06-10 11:39:51.490300] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:19.582 [2024-06-10 11:39:51.490425] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:19.582 [2024-06-10 11:39:51.490762] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:19.582 [2024-06-10 11:39:51.490784] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:19.582 [2024-06-10 11:39:51.491068] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.582 BaseBdev2 00:16:19.582 11:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:19.582 
11:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:16:19.582 11:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:19.582 11:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:19.582 11:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:19.582 11:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:19.582 11:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.841 11:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:20.099 [ 00:16:20.099 { 00:16:20.099 "name": "BaseBdev2", 00:16:20.099 "aliases": [ 00:16:20.099 "99e172b0-0a65-43c1-8fb7-4c87ef855320" 00:16:20.099 ], 00:16:20.099 "product_name": "Malloc disk", 00:16:20.099 "block_size": 512, 00:16:20.099 "num_blocks": 65536, 00:16:20.099 "uuid": "99e172b0-0a65-43c1-8fb7-4c87ef855320", 00:16:20.099 "assigned_rate_limits": { 00:16:20.099 "rw_ios_per_sec": 0, 00:16:20.099 "rw_mbytes_per_sec": 0, 00:16:20.099 "r_mbytes_per_sec": 0, 00:16:20.099 "w_mbytes_per_sec": 0 00:16:20.099 }, 00:16:20.099 "claimed": true, 00:16:20.099 "claim_type": "exclusive_write", 00:16:20.099 "zoned": false, 00:16:20.099 "supported_io_types": { 00:16:20.099 "read": true, 00:16:20.099 "write": true, 00:16:20.099 "unmap": true, 00:16:20.099 "write_zeroes": true, 00:16:20.099 "flush": true, 00:16:20.099 "reset": true, 00:16:20.099 "compare": false, 00:16:20.099 "compare_and_write": false, 00:16:20.099 "abort": true, 00:16:20.099 "nvme_admin": false, 00:16:20.099 "nvme_io": false 00:16:20.099 }, 00:16:20.099 "memory_domains": [ 00:16:20.099 { 00:16:20.099 "dma_device_id": "system", 00:16:20.099 "dma_device_type": 1 00:16:20.099 }, 00:16:20.099 { 00:16:20.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.099 "dma_device_type": 2 00:16:20.099 } 00:16:20.099 ], 00:16:20.099 "driver_specific": {} 00:16:20.099 } 00:16:20.099 ] 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.099 
11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.099 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.388 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.388 "name": "Existed_Raid", 00:16:20.388 "uuid": "ae310a94-cc13-43b3-941a-2aa323111430", 00:16:20.388 "strip_size_kb": 64, 00:16:20.388 "state": "online", 00:16:20.388 "raid_level": "raid0", 00:16:20.388 "superblock": false, 00:16:20.388 "num_base_bdevs": 2, 00:16:20.388 "num_base_bdevs_discovered": 2, 00:16:20.388 "num_base_bdevs_operational": 2, 00:16:20.388 "base_bdevs_list": [ 00:16:20.388 { 00:16:20.388 "name": "BaseBdev1", 00:16:20.388 "uuid": "21ff4bbd-e6c9-4345-b775-2c1cd4240ab5", 00:16:20.388 "is_configured": true, 00:16:20.388 "data_offset": 0, 00:16:20.388 "data_size": 65536 00:16:20.388 }, 00:16:20.388 { 00:16:20.388 "name": "BaseBdev2", 00:16:20.388 "uuid": "99e172b0-0a65-43c1-8fb7-4c87ef855320", 00:16:20.388 "is_configured": true, 00:16:20.388 "data_offset": 0, 00:16:20.388 "data_size": 65536 00:16:20.388 } 00:16:20.388 ] 00:16:20.388 }' 00:16:20.388 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.388 11:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.970 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:20.970 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:20.970 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:20.970 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:20.970 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:20.970 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:20.970 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:20.970 11:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:21.229 [2024-06-10 11:39:53.234965] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.229 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:21.229 "name": "Existed_Raid", 00:16:21.229 "aliases": [ 00:16:21.229 "ae310a94-cc13-43b3-941a-2aa323111430" 00:16:21.229 ], 00:16:21.229 "product_name": "Raid Volume", 00:16:21.229 "block_size": 512, 00:16:21.229 "num_blocks": 131072, 00:16:21.229 "uuid": "ae310a94-cc13-43b3-941a-2aa323111430", 00:16:21.229 "assigned_rate_limits": { 00:16:21.229 "rw_ios_per_sec": 0, 00:16:21.229 "rw_mbytes_per_sec": 0, 00:16:21.229 "r_mbytes_per_sec": 0, 00:16:21.229 "w_mbytes_per_sec": 0 00:16:21.229 }, 00:16:21.229 "claimed": false, 00:16:21.229 "zoned": false, 00:16:21.229 "supported_io_types": { 00:16:21.229 "read": true, 00:16:21.229 "write": true, 00:16:21.229 
"unmap": true, 00:16:21.229 "write_zeroes": true, 00:16:21.229 "flush": true, 00:16:21.229 "reset": true, 00:16:21.229 "compare": false, 00:16:21.229 "compare_and_write": false, 00:16:21.229 "abort": false, 00:16:21.229 "nvme_admin": false, 00:16:21.229 "nvme_io": false 00:16:21.229 }, 00:16:21.229 "memory_domains": [ 00:16:21.229 { 00:16:21.229 "dma_device_id": "system", 00:16:21.229 "dma_device_type": 1 00:16:21.229 }, 00:16:21.229 { 00:16:21.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.229 "dma_device_type": 2 00:16:21.229 }, 00:16:21.229 { 00:16:21.229 "dma_device_id": "system", 00:16:21.229 "dma_device_type": 1 00:16:21.229 }, 00:16:21.229 { 00:16:21.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.229 "dma_device_type": 2 00:16:21.229 } 00:16:21.229 ], 00:16:21.229 "driver_specific": { 00:16:21.229 "raid": { 00:16:21.229 "uuid": "ae310a94-cc13-43b3-941a-2aa323111430", 00:16:21.229 "strip_size_kb": 64, 00:16:21.229 "state": "online", 00:16:21.229 "raid_level": "raid0", 00:16:21.229 "superblock": false, 00:16:21.229 "num_base_bdevs": 2, 00:16:21.229 "num_base_bdevs_discovered": 2, 00:16:21.229 "num_base_bdevs_operational": 2, 00:16:21.229 "base_bdevs_list": [ 00:16:21.229 { 00:16:21.229 "name": "BaseBdev1", 00:16:21.229 "uuid": "21ff4bbd-e6c9-4345-b775-2c1cd4240ab5", 00:16:21.229 "is_configured": true, 00:16:21.229 "data_offset": 0, 00:16:21.229 "data_size": 65536 00:16:21.229 }, 00:16:21.229 { 00:16:21.229 "name": "BaseBdev2", 00:16:21.229 "uuid": "99e172b0-0a65-43c1-8fb7-4c87ef855320", 00:16:21.229 "is_configured": true, 00:16:21.229 "data_offset": 0, 00:16:21.229 "data_size": 65536 00:16:21.229 } 00:16:21.229 ] 00:16:21.229 } 00:16:21.229 } 00:16:21.229 }' 00:16:21.229 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.488 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:21.488 BaseBdev2' 00:16:21.488 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:21.488 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:21.488 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:21.746 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:21.746 "name": "BaseBdev1", 00:16:21.746 "aliases": [ 00:16:21.746 "21ff4bbd-e6c9-4345-b775-2c1cd4240ab5" 00:16:21.746 ], 00:16:21.746 "product_name": "Malloc disk", 00:16:21.746 "block_size": 512, 00:16:21.746 "num_blocks": 65536, 00:16:21.746 "uuid": "21ff4bbd-e6c9-4345-b775-2c1cd4240ab5", 00:16:21.746 "assigned_rate_limits": { 00:16:21.746 "rw_ios_per_sec": 0, 00:16:21.746 "rw_mbytes_per_sec": 0, 00:16:21.746 "r_mbytes_per_sec": 0, 00:16:21.746 "w_mbytes_per_sec": 0 00:16:21.746 }, 00:16:21.746 "claimed": true, 00:16:21.746 "claim_type": "exclusive_write", 00:16:21.746 "zoned": false, 00:16:21.746 "supported_io_types": { 00:16:21.746 "read": true, 00:16:21.746 "write": true, 00:16:21.746 "unmap": true, 00:16:21.746 "write_zeroes": true, 00:16:21.746 "flush": true, 00:16:21.746 "reset": true, 00:16:21.746 "compare": false, 00:16:21.746 "compare_and_write": false, 00:16:21.746 "abort": true, 00:16:21.746 "nvme_admin": false, 00:16:21.746 "nvme_io": false 00:16:21.746 }, 00:16:21.746 "memory_domains": [ 
00:16:21.746 { 00:16:21.746 "dma_device_id": "system", 00:16:21.746 "dma_device_type": 1 00:16:21.746 }, 00:16:21.746 { 00:16:21.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.746 "dma_device_type": 2 00:16:21.746 } 00:16:21.746 ], 00:16:21.746 "driver_specific": {} 00:16:21.746 }' 00:16:21.746 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.746 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.746 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:21.746 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.746 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.746 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:21.746 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.004 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.004 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:22.004 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.004 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.004 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:22.004 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:22.004 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.004 11:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:22.261 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.261 "name": "BaseBdev2", 00:16:22.261 "aliases": [ 00:16:22.261 "99e172b0-0a65-43c1-8fb7-4c87ef855320" 00:16:22.261 ], 00:16:22.261 "product_name": "Malloc disk", 00:16:22.261 "block_size": 512, 00:16:22.261 "num_blocks": 65536, 00:16:22.261 "uuid": "99e172b0-0a65-43c1-8fb7-4c87ef855320", 00:16:22.261 "assigned_rate_limits": { 00:16:22.261 "rw_ios_per_sec": 0, 00:16:22.261 "rw_mbytes_per_sec": 0, 00:16:22.261 "r_mbytes_per_sec": 0, 00:16:22.261 "w_mbytes_per_sec": 0 00:16:22.261 }, 00:16:22.261 "claimed": true, 00:16:22.261 "claim_type": "exclusive_write", 00:16:22.261 "zoned": false, 00:16:22.261 "supported_io_types": { 00:16:22.261 "read": true, 00:16:22.261 "write": true, 00:16:22.261 "unmap": true, 00:16:22.261 "write_zeroes": true, 00:16:22.261 "flush": true, 00:16:22.261 "reset": true, 00:16:22.261 "compare": false, 00:16:22.261 "compare_and_write": false, 00:16:22.261 "abort": true, 00:16:22.261 "nvme_admin": false, 00:16:22.261 "nvme_io": false 00:16:22.261 }, 00:16:22.261 "memory_domains": [ 00:16:22.261 { 00:16:22.261 "dma_device_id": "system", 00:16:22.261 "dma_device_type": 1 00:16:22.261 }, 00:16:22.261 { 00:16:22.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.261 "dma_device_type": 2 00:16:22.261 } 00:16:22.261 ], 00:16:22.261 "driver_specific": {} 00:16:22.261 }' 00:16:22.261 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.261 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
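The property checks being traced here all reduce to dumping each bdev over RPC and comparing a few jq fields against the raid volume's. A condensed sketch of that verify step (the field list comes from the traced jq calls; the comparison structure is simplified from the helper):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    for name in BaseBdev1 BaseBdev2; do
        base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # geometry and metadata layout of every base bdev must match the volume
        for field in .block_size .md_size .md_interleave .dif_type; do
            [[ "$(jq "$field" <<< "$raid_info")" == "$(jq "$field" <<< "$base_info")" ]] \
                || echo "mismatch on $name $field"
        done
    done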
00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:22.519 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:22.777 [2024-06-10 11:39:54.819243] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.777 [2024-06-10 11:39:54.819283] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.777 [2024-06-10 11:39:54.819345] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.076 11:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:23.348 11:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.348 "name": "Existed_Raid", 00:16:23.348 "uuid": "ae310a94-cc13-43b3-941a-2aa323111430", 00:16:23.348 "strip_size_kb": 64, 00:16:23.348 "state": "offline", 00:16:23.348 "raid_level": "raid0", 00:16:23.348 "superblock": false, 00:16:23.348 "num_base_bdevs": 2, 00:16:23.348 "num_base_bdevs_discovered": 1, 00:16:23.348 "num_base_bdevs_operational": 1, 00:16:23.348 "base_bdevs_list": [ 00:16:23.348 { 00:16:23.348 "name": null, 00:16:23.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.348 "is_configured": false, 00:16:23.348 "data_offset": 0, 00:16:23.348 "data_size": 65536 00:16:23.348 }, 00:16:23.348 { 00:16:23.348 "name": "BaseBdev2", 00:16:23.348 "uuid": "99e172b0-0a65-43c1-8fb7-4c87ef855320", 00:16:23.348 "is_configured": true, 00:16:23.348 "data_offset": 0, 00:16:23.348 "data_size": 65536 00:16:23.348 } 00:16:23.348 ] 00:16:23.348 }' 00:16:23.348 11:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.348 11:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.915 11:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:23.915 11:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:23.915 11:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:23.915 11:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.174 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:24.174 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.174 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:24.432 [2024-06-10 11:39:56.327177] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.432 [2024-06-10 11:39:56.327279] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:24.432 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:24.432 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:24.432 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.432 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:24.998 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:24.998 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:24.998 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:24.998 11:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 121582 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 121582 ']' 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 121582 00:16:24.999 11:39:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 121582 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:24.999 killing process with pid 121582 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 121582' 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 121582 00:16:24.999 [2024-06-10 11:39:56.779638] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.999 11:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 121582 00:16:24.999 [2024-06-10 11:39:56.779799] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:26.375 00:16:26.375 real 0m12.827s 00:16:26.375 user 0m21.969s 00:16:26.375 sys 0m1.811s 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.375 ************************************ 00:16:26.375 END TEST raid_state_function_test 00:16:26.375 ************************************ 00:16:26.375 11:39:58 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:16:26.375 11:39:58 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:26.375 11:39:58 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:26.375 11:39:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.375 ************************************ 00:16:26.375 START TEST raid_state_function_test_sb 00:16:26.375 ************************************ 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 2 true 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=121977 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121977' 00:16:26.375 Process raid pid: 121977 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 121977 /var/tmp/spdk-raid.sock 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 121977 ']' 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:26.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:26.375 11:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.375 [2024-06-10 11:39:58.376962] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
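The entries around this point show the superblock variant starting its own bdev_svc app on a private RPC socket and then asking for a raid0 bdev with a superblock before any base bdevs exist, which is why the raid first reports the "configuring" state. Pulled together as a stand-alone sketch: the commands and flags are the ones visible in this log, waitforlisten is a helper sourced from autotest_common.sh, and the backgrounding and variable names are illustrative assumptions:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the RPC socket is up
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # -s requests an on-disk superblock, -z 64 a 64 KiB strip size. Creating the
    # raid before BaseBdev1/BaseBdev2 exist leaves it in the "configuring" state,
    # which is the first thing the test verifies.
    $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'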
00:16:26.375 [2024-06-10 11:39:58.377152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.633 [2024-06-10 11:39:58.546934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.890 [2024-06-10 11:39:58.819833] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.148 [2024-06-10 11:39:59.056571] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.406 11:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:27.406 11:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:16:27.406 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:27.664 [2024-06-10 11:39:59.616799] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.664 [2024-06-10 11:39:59.616898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:27.664 [2024-06-10 11:39:59.616912] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.664 [2024-06-10 11:39:59.616942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.664 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.922 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.922 "name": "Existed_Raid", 00:16:27.922 "uuid": "240e4324-635d-41e4-9dc0-827c79e39627", 00:16:27.922 "strip_size_kb": 64, 00:16:27.922 "state": "configuring", 00:16:27.922 "raid_level": "raid0", 00:16:27.922 "superblock": true, 00:16:27.922 "num_base_bdevs": 2, 00:16:27.922 "num_base_bdevs_discovered": 0, 00:16:27.922 "num_base_bdevs_operational": 2, 
00:16:27.922 "base_bdevs_list": [ 00:16:27.922 { 00:16:27.922 "name": "BaseBdev1", 00:16:27.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.922 "is_configured": false, 00:16:27.922 "data_offset": 0, 00:16:27.922 "data_size": 0 00:16:27.922 }, 00:16:27.922 { 00:16:27.922 "name": "BaseBdev2", 00:16:27.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.922 "is_configured": false, 00:16:27.922 "data_offset": 0, 00:16:27.922 "data_size": 0 00:16:27.922 } 00:16:27.922 ] 00:16:27.922 }' 00:16:27.922 11:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.922 11:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.488 11:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:29.054 [2024-06-10 11:40:00.816894] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.054 [2024-06-10 11:40:00.816959] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:29.054 11:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:29.054 [2024-06-10 11:40:01.100958] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.054 [2024-06-10 11:40:01.101036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.054 [2024-06-10 11:40:01.101047] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.054 [2024-06-10 11:40:01.101073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.312 11:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.569 [2024-06-10 11:40:01.434779] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.569 BaseBdev1 00:16:29.569 11:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:29.569 11:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:16:29.569 11:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:29.569 11:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:29.569 11:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:29.569 11:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:29.569 11:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.826 11:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.392 [ 00:16:30.392 { 00:16:30.392 "name": "BaseBdev1", 00:16:30.392 "aliases": [ 00:16:30.392 "58078059-499a-40b8-8279-03b7624f639c" 00:16:30.392 ], 00:16:30.392 
"product_name": "Malloc disk", 00:16:30.392 "block_size": 512, 00:16:30.392 "num_blocks": 65536, 00:16:30.392 "uuid": "58078059-499a-40b8-8279-03b7624f639c", 00:16:30.392 "assigned_rate_limits": { 00:16:30.392 "rw_ios_per_sec": 0, 00:16:30.392 "rw_mbytes_per_sec": 0, 00:16:30.392 "r_mbytes_per_sec": 0, 00:16:30.392 "w_mbytes_per_sec": 0 00:16:30.392 }, 00:16:30.392 "claimed": true, 00:16:30.392 "claim_type": "exclusive_write", 00:16:30.392 "zoned": false, 00:16:30.392 "supported_io_types": { 00:16:30.392 "read": true, 00:16:30.392 "write": true, 00:16:30.392 "unmap": true, 00:16:30.392 "write_zeroes": true, 00:16:30.392 "flush": true, 00:16:30.392 "reset": true, 00:16:30.392 "compare": false, 00:16:30.392 "compare_and_write": false, 00:16:30.392 "abort": true, 00:16:30.392 "nvme_admin": false, 00:16:30.392 "nvme_io": false 00:16:30.392 }, 00:16:30.392 "memory_domains": [ 00:16:30.392 { 00:16:30.392 "dma_device_id": "system", 00:16:30.392 "dma_device_type": 1 00:16:30.392 }, 00:16:30.392 { 00:16:30.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.392 "dma_device_type": 2 00:16:30.392 } 00:16:30.392 ], 00:16:30.392 "driver_specific": {} 00:16:30.392 } 00:16:30.392 ] 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.392 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.650 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.650 "name": "Existed_Raid", 00:16:30.650 "uuid": "99ae04eb-8b7f-4878-955d-dc28034b8cf0", 00:16:30.650 "strip_size_kb": 64, 00:16:30.650 "state": "configuring", 00:16:30.650 "raid_level": "raid0", 00:16:30.650 "superblock": true, 00:16:30.650 "num_base_bdevs": 2, 00:16:30.650 "num_base_bdevs_discovered": 1, 00:16:30.650 "num_base_bdevs_operational": 2, 00:16:30.650 "base_bdevs_list": [ 00:16:30.650 { 00:16:30.650 "name": "BaseBdev1", 00:16:30.650 "uuid": "58078059-499a-40b8-8279-03b7624f639c", 00:16:30.650 "is_configured": true, 00:16:30.650 "data_offset": 2048, 00:16:30.650 "data_size": 63488 00:16:30.650 }, 00:16:30.650 { 
00:16:30.650 "name": "BaseBdev2", 00:16:30.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.650 "is_configured": false, 00:16:30.650 "data_offset": 0, 00:16:30.650 "data_size": 0 00:16:30.650 } 00:16:30.650 ] 00:16:30.650 }' 00:16:30.650 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.650 11:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.217 11:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:31.217 [2024-06-10 11:40:03.200404] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.217 [2024-06-10 11:40:03.200472] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:31.217 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:31.475 [2024-06-10 11:40:03.412514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.475 [2024-06-10 11:40:03.414754] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.475 [2024-06-10 11:40:03.414819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.475 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.734 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.734 "name": "Existed_Raid", 00:16:31.734 "uuid": "e01e459a-2fdb-4100-8ea6-66184444866f", 00:16:31.734 "strip_size_kb": 64, 00:16:31.734 "state": "configuring", 00:16:31.734 
"raid_level": "raid0", 00:16:31.734 "superblock": true, 00:16:31.734 "num_base_bdevs": 2, 00:16:31.734 "num_base_bdevs_discovered": 1, 00:16:31.734 "num_base_bdevs_operational": 2, 00:16:31.734 "base_bdevs_list": [ 00:16:31.734 { 00:16:31.734 "name": "BaseBdev1", 00:16:31.734 "uuid": "58078059-499a-40b8-8279-03b7624f639c", 00:16:31.734 "is_configured": true, 00:16:31.734 "data_offset": 2048, 00:16:31.734 "data_size": 63488 00:16:31.734 }, 00:16:31.734 { 00:16:31.734 "name": "BaseBdev2", 00:16:31.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.734 "is_configured": false, 00:16:31.734 "data_offset": 0, 00:16:31.734 "data_size": 0 00:16:31.734 } 00:16:31.734 ] 00:16:31.734 }' 00:16:31.734 11:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.734 11:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.301 11:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.560 [2024-06-10 11:40:04.572055] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.560 [2024-06-10 11:40:04.572310] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:32.560 [2024-06-10 11:40:04.572323] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:32.560 [2024-06-10 11:40:04.572442] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:32.560 [2024-06-10 11:40:04.572785] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:32.560 [2024-06-10 11:40:04.572810] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:32.560 [2024-06-10 11:40:04.572965] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.560 BaseBdev2 00:16:32.560 11:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:32.560 11:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:16:32.560 11:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:32.560 11:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:32.560 11:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:32.560 11:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:32.560 11:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.129 11:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.129 [ 00:16:33.129 { 00:16:33.129 "name": "BaseBdev2", 00:16:33.129 "aliases": [ 00:16:33.129 "1462ec5f-2e25-4159-8721-066ba971f7ae" 00:16:33.129 ], 00:16:33.129 "product_name": "Malloc disk", 00:16:33.129 "block_size": 512, 00:16:33.129 "num_blocks": 65536, 00:16:33.129 "uuid": "1462ec5f-2e25-4159-8721-066ba971f7ae", 00:16:33.129 "assigned_rate_limits": { 00:16:33.129 "rw_ios_per_sec": 0, 00:16:33.129 "rw_mbytes_per_sec": 0, 
00:16:33.129 "r_mbytes_per_sec": 0, 00:16:33.129 "w_mbytes_per_sec": 0 00:16:33.129 }, 00:16:33.129 "claimed": true, 00:16:33.129 "claim_type": "exclusive_write", 00:16:33.129 "zoned": false, 00:16:33.129 "supported_io_types": { 00:16:33.129 "read": true, 00:16:33.129 "write": true, 00:16:33.129 "unmap": true, 00:16:33.129 "write_zeroes": true, 00:16:33.129 "flush": true, 00:16:33.129 "reset": true, 00:16:33.129 "compare": false, 00:16:33.129 "compare_and_write": false, 00:16:33.129 "abort": true, 00:16:33.129 "nvme_admin": false, 00:16:33.129 "nvme_io": false 00:16:33.129 }, 00:16:33.129 "memory_domains": [ 00:16:33.129 { 00:16:33.129 "dma_device_id": "system", 00:16:33.129 "dma_device_type": 1 00:16:33.129 }, 00:16:33.129 { 00:16:33.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.129 "dma_device_type": 2 00:16:33.129 } 00:16:33.129 ], 00:16:33.129 "driver_specific": {} 00:16:33.129 } 00:16:33.129 ] 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.129 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.388 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.388 "name": "Existed_Raid", 00:16:33.388 "uuid": "e01e459a-2fdb-4100-8ea6-66184444866f", 00:16:33.388 "strip_size_kb": 64, 00:16:33.388 "state": "online", 00:16:33.388 "raid_level": "raid0", 00:16:33.388 "superblock": true, 00:16:33.388 "num_base_bdevs": 2, 00:16:33.388 "num_base_bdevs_discovered": 2, 00:16:33.388 "num_base_bdevs_operational": 2, 00:16:33.388 "base_bdevs_list": [ 00:16:33.388 { 00:16:33.388 "name": "BaseBdev1", 00:16:33.388 "uuid": "58078059-499a-40b8-8279-03b7624f639c", 00:16:33.388 "is_configured": true, 00:16:33.388 "data_offset": 2048, 00:16:33.388 "data_size": 63488 00:16:33.388 }, 00:16:33.388 { 00:16:33.388 "name": "BaseBdev2", 00:16:33.388 "uuid": 
"1462ec5f-2e25-4159-8721-066ba971f7ae", 00:16:33.388 "is_configured": true, 00:16:33.388 "data_offset": 2048, 00:16:33.388 "data_size": 63488 00:16:33.388 } 00:16:33.388 ] 00:16:33.388 }' 00:16:33.388 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.388 11:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.955 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.955 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:33.955 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:33.955 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:33.955 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:33.955 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:33.955 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:33.955 11:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:34.214 [2024-06-10 11:40:06.123395] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.214 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:34.214 "name": "Existed_Raid", 00:16:34.214 "aliases": [ 00:16:34.214 "e01e459a-2fdb-4100-8ea6-66184444866f" 00:16:34.214 ], 00:16:34.214 "product_name": "Raid Volume", 00:16:34.214 "block_size": 512, 00:16:34.214 "num_blocks": 126976, 00:16:34.214 "uuid": "e01e459a-2fdb-4100-8ea6-66184444866f", 00:16:34.214 "assigned_rate_limits": { 00:16:34.214 "rw_ios_per_sec": 0, 00:16:34.214 "rw_mbytes_per_sec": 0, 00:16:34.214 "r_mbytes_per_sec": 0, 00:16:34.214 "w_mbytes_per_sec": 0 00:16:34.214 }, 00:16:34.214 "claimed": false, 00:16:34.214 "zoned": false, 00:16:34.214 "supported_io_types": { 00:16:34.214 "read": true, 00:16:34.214 "write": true, 00:16:34.214 "unmap": true, 00:16:34.214 "write_zeroes": true, 00:16:34.214 "flush": true, 00:16:34.214 "reset": true, 00:16:34.214 "compare": false, 00:16:34.214 "compare_and_write": false, 00:16:34.214 "abort": false, 00:16:34.214 "nvme_admin": false, 00:16:34.214 "nvme_io": false 00:16:34.214 }, 00:16:34.214 "memory_domains": [ 00:16:34.214 { 00:16:34.214 "dma_device_id": "system", 00:16:34.214 "dma_device_type": 1 00:16:34.214 }, 00:16:34.214 { 00:16:34.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.214 "dma_device_type": 2 00:16:34.214 }, 00:16:34.214 { 00:16:34.214 "dma_device_id": "system", 00:16:34.214 "dma_device_type": 1 00:16:34.214 }, 00:16:34.214 { 00:16:34.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.214 "dma_device_type": 2 00:16:34.214 } 00:16:34.214 ], 00:16:34.214 "driver_specific": { 00:16:34.214 "raid": { 00:16:34.214 "uuid": "e01e459a-2fdb-4100-8ea6-66184444866f", 00:16:34.214 "strip_size_kb": 64, 00:16:34.214 "state": "online", 00:16:34.214 "raid_level": "raid0", 00:16:34.214 "superblock": true, 00:16:34.214 "num_base_bdevs": 2, 00:16:34.214 "num_base_bdevs_discovered": 2, 00:16:34.214 "num_base_bdevs_operational": 2, 00:16:34.214 "base_bdevs_list": [ 00:16:34.214 { 00:16:34.214 "name": "BaseBdev1", 00:16:34.214 "uuid": 
"58078059-499a-40b8-8279-03b7624f639c", 00:16:34.214 "is_configured": true, 00:16:34.214 "data_offset": 2048, 00:16:34.214 "data_size": 63488 00:16:34.214 }, 00:16:34.214 { 00:16:34.214 "name": "BaseBdev2", 00:16:34.214 "uuid": "1462ec5f-2e25-4159-8721-066ba971f7ae", 00:16:34.214 "is_configured": true, 00:16:34.214 "data_offset": 2048, 00:16:34.214 "data_size": 63488 00:16:34.214 } 00:16:34.214 ] 00:16:34.214 } 00:16:34.214 } 00:16:34.214 }' 00:16:34.214 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.214 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:34.214 BaseBdev2' 00:16:34.214 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:34.214 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:34.214 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:34.472 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:34.472 "name": "BaseBdev1", 00:16:34.472 "aliases": [ 00:16:34.472 "58078059-499a-40b8-8279-03b7624f639c" 00:16:34.472 ], 00:16:34.472 "product_name": "Malloc disk", 00:16:34.472 "block_size": 512, 00:16:34.472 "num_blocks": 65536, 00:16:34.472 "uuid": "58078059-499a-40b8-8279-03b7624f639c", 00:16:34.472 "assigned_rate_limits": { 00:16:34.472 "rw_ios_per_sec": 0, 00:16:34.472 "rw_mbytes_per_sec": 0, 00:16:34.472 "r_mbytes_per_sec": 0, 00:16:34.472 "w_mbytes_per_sec": 0 00:16:34.472 }, 00:16:34.472 "claimed": true, 00:16:34.472 "claim_type": "exclusive_write", 00:16:34.472 "zoned": false, 00:16:34.472 "supported_io_types": { 00:16:34.472 "read": true, 00:16:34.472 "write": true, 00:16:34.472 "unmap": true, 00:16:34.472 "write_zeroes": true, 00:16:34.472 "flush": true, 00:16:34.472 "reset": true, 00:16:34.473 "compare": false, 00:16:34.473 "compare_and_write": false, 00:16:34.473 "abort": true, 00:16:34.473 "nvme_admin": false, 00:16:34.473 "nvme_io": false 00:16:34.473 }, 00:16:34.473 "memory_domains": [ 00:16:34.473 { 00:16:34.473 "dma_device_id": "system", 00:16:34.473 "dma_device_type": 1 00:16:34.473 }, 00:16:34.473 { 00:16:34.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.473 "dma_device_type": 2 00:16:34.473 } 00:16:34.473 ], 00:16:34.473 "driver_specific": {} 00:16:34.473 }' 00:16:34.473 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:16:34.731 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:34.990 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:34.990 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:34.990 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:34.990 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:34.990 11:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:35.248 "name": "BaseBdev2", 00:16:35.248 "aliases": [ 00:16:35.248 "1462ec5f-2e25-4159-8721-066ba971f7ae" 00:16:35.248 ], 00:16:35.248 "product_name": "Malloc disk", 00:16:35.248 "block_size": 512, 00:16:35.248 "num_blocks": 65536, 00:16:35.248 "uuid": "1462ec5f-2e25-4159-8721-066ba971f7ae", 00:16:35.248 "assigned_rate_limits": { 00:16:35.248 "rw_ios_per_sec": 0, 00:16:35.248 "rw_mbytes_per_sec": 0, 00:16:35.248 "r_mbytes_per_sec": 0, 00:16:35.248 "w_mbytes_per_sec": 0 00:16:35.248 }, 00:16:35.248 "claimed": true, 00:16:35.248 "claim_type": "exclusive_write", 00:16:35.248 "zoned": false, 00:16:35.248 "supported_io_types": { 00:16:35.248 "read": true, 00:16:35.248 "write": true, 00:16:35.248 "unmap": true, 00:16:35.248 "write_zeroes": true, 00:16:35.248 "flush": true, 00:16:35.248 "reset": true, 00:16:35.248 "compare": false, 00:16:35.248 "compare_and_write": false, 00:16:35.248 "abort": true, 00:16:35.248 "nvme_admin": false, 00:16:35.248 "nvme_io": false 00:16:35.248 }, 00:16:35.248 "memory_domains": [ 00:16:35.248 { 00:16:35.248 "dma_device_id": "system", 00:16:35.248 "dma_device_type": 1 00:16:35.248 }, 00:16:35.248 { 00:16:35.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.248 "dma_device_type": 2 00:16:35.248 } 00:16:35.248 ], 00:16:35.248 "driver_specific": {} 00:16:35.248 }' 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:35.248 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:35.528 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.528 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:35.528 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:35.528 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.528 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:35.863 [2024-06-10 11:40:07.712238] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.863 [2024-06-10 11:40:07.712285] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.863 [2024-06-10 11:40:07.712336] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.863 11:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.121 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.121 "name": "Existed_Raid", 00:16:36.121 "uuid": "e01e459a-2fdb-4100-8ea6-66184444866f", 00:16:36.121 "strip_size_kb": 64, 00:16:36.121 "state": "offline", 00:16:36.121 "raid_level": "raid0", 00:16:36.121 "superblock": true, 00:16:36.121 "num_base_bdevs": 2, 00:16:36.121 "num_base_bdevs_discovered": 1, 00:16:36.121 "num_base_bdevs_operational": 1, 00:16:36.121 "base_bdevs_list": [ 00:16:36.121 { 00:16:36.121 "name": null, 00:16:36.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.121 "is_configured": false, 00:16:36.121 "data_offset": 2048, 00:16:36.121 "data_size": 63488 00:16:36.121 }, 00:16:36.121 { 00:16:36.121 "name": "BaseBdev2", 00:16:36.121 "uuid": "1462ec5f-2e25-4159-8721-066ba971f7ae", 00:16:36.121 "is_configured": true, 00:16:36.121 "data_offset": 2048, 00:16:36.121 "data_size": 63488 00:16:36.121 } 00:16:36.121 ] 00:16:36.121 }' 00:16:36.121 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.121 11:40:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.688 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:36.688 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:36.688 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.688 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:36.948 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:36.948 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.948 11:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:37.208 [2024-06-10 11:40:09.199501] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.208 [2024-06-10 11:40:09.199579] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:37.466 11:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:37.466 11:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:37.466 11:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.466 11:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 121977 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 121977 ']' 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 121977 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 121977 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 121977' 00:16:37.726 killing process with pid 121977 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 121977 00:16:37.726 11:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 121977 00:16:37.726 [2024-06-10 11:40:09.684078] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:16:37.727 [2024-06-10 11:40:09.684207] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:39.105 ************************************ 00:16:39.105 END TEST raid_state_function_test_sb 00:16:39.105 ************************************ 00:16:39.105 11:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:39.105 00:16:39.105 real 0m12.835s 00:16:39.105 user 0m21.961s 00:16:39.105 sys 0m1.763s 00:16:39.105 11:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:39.105 11:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.364 11:40:11 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:16:39.364 11:40:11 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:16:39.364 11:40:11 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:39.364 11:40:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:39.364 ************************************ 00:16:39.364 START TEST raid_superblock_test 00:16:39.364 ************************************ 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 2 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=122372 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 122372 /var/tmp/spdk-raid.sock 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 122372 ']' 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:39.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:39.364 11:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.364 [2024-06-10 11:40:11.257399] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:16:39.364 [2024-06-10 11:40:11.257586] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122372 ] 00:16:39.364 [2024-06-10 11:40:11.421561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.623 [2024-06-10 11:40:11.628311] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.881 [2024-06-10 11:40:11.866365] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:40.139 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:40.397 malloc1 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:40.655 [2024-06-10 11:40:12.666814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:40.655 [2024-06-10 11:40:12.666934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.655 [2024-06-10 11:40:12.666977] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:40.655 [2024-06-10 11:40:12.667017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.655 [2024-06-10 11:40:12.669710] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.655 [2024-06-10 11:40:12.669769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:40.655 pt1 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:40.655 11:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:41.221 malloc2 00:16:41.221 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.221 [2024-06-10 11:40:13.202648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.221 [2024-06-10 11:40:13.202784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.221 [2024-06-10 11:40:13.202840] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:41.221 [2024-06-10 11:40:13.202862] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.221 [2024-06-10 11:40:13.205438] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.221 [2024-06-10 11:40:13.205497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.221 pt2 00:16:41.221 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:41.221 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:41.221 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:41.481 [2024-06-10 11:40:13.398764] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:41.481 [2024-06-10 11:40:13.401023] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.481 [2024-06-10 11:40:13.401225] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:41.481 [2024-06-10 11:40:13.401238] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:41.481 [2024-06-10 11:40:13.401387] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:41.481 [2024-06-10 11:40:13.401747] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:41.481 [2024-06-10 11:40:13.401769] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:41.481 [2024-06-10 11:40:13.401944] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.481 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.740 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.740 "name": "raid_bdev1", 00:16:41.740 "uuid": "3f35e82e-16c8-4141-9f98-c7d11246edf7", 00:16:41.740 "strip_size_kb": 64, 00:16:41.740 "state": "online", 00:16:41.740 "raid_level": "raid0", 00:16:41.740 "superblock": true, 00:16:41.740 "num_base_bdevs": 2, 00:16:41.740 "num_base_bdevs_discovered": 2, 00:16:41.740 "num_base_bdevs_operational": 2, 00:16:41.740 "base_bdevs_list": [ 00:16:41.740 { 00:16:41.740 "name": "pt1", 00:16:41.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.740 "is_configured": true, 00:16:41.740 "data_offset": 2048, 00:16:41.740 "data_size": 63488 00:16:41.740 }, 00:16:41.740 { 00:16:41.740 "name": "pt2", 00:16:41.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.740 "is_configured": true, 00:16:41.740 "data_offset": 2048, 00:16:41.740 "data_size": 63488 00:16:41.740 } 00:16:41.740 ] 00:16:41.740 }' 00:16:41.740 11:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.740 11:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.674 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:42.674 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:42.674 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:42.674 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:42.674 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:42.674 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:42.674 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:16:42.674 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:42.674 [2024-06-10 11:40:14.711315] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.931 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:42.932 "name": "raid_bdev1", 00:16:42.932 "aliases": [ 00:16:42.932 "3f35e82e-16c8-4141-9f98-c7d11246edf7" 00:16:42.932 ], 00:16:42.932 "product_name": "Raid Volume", 00:16:42.932 "block_size": 512, 00:16:42.932 "num_blocks": 126976, 00:16:42.932 "uuid": "3f35e82e-16c8-4141-9f98-c7d11246edf7", 00:16:42.932 "assigned_rate_limits": { 00:16:42.932 "rw_ios_per_sec": 0, 00:16:42.932 "rw_mbytes_per_sec": 0, 00:16:42.932 "r_mbytes_per_sec": 0, 00:16:42.932 "w_mbytes_per_sec": 0 00:16:42.932 }, 00:16:42.932 "claimed": false, 00:16:42.932 "zoned": false, 00:16:42.932 "supported_io_types": { 00:16:42.932 "read": true, 00:16:42.932 "write": true, 00:16:42.932 "unmap": true, 00:16:42.932 "write_zeroes": true, 00:16:42.932 "flush": true, 00:16:42.932 "reset": true, 00:16:42.932 "compare": false, 00:16:42.932 "compare_and_write": false, 00:16:42.932 "abort": false, 00:16:42.932 "nvme_admin": false, 00:16:42.932 "nvme_io": false 00:16:42.932 }, 00:16:42.932 "memory_domains": [ 00:16:42.932 { 00:16:42.932 "dma_device_id": "system", 00:16:42.932 "dma_device_type": 1 00:16:42.932 }, 00:16:42.932 { 00:16:42.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.932 "dma_device_type": 2 00:16:42.932 }, 00:16:42.932 { 00:16:42.932 "dma_device_id": "system", 00:16:42.932 "dma_device_type": 1 00:16:42.932 }, 00:16:42.932 { 00:16:42.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.932 "dma_device_type": 2 00:16:42.932 } 00:16:42.932 ], 00:16:42.932 "driver_specific": { 00:16:42.932 "raid": { 00:16:42.932 "uuid": "3f35e82e-16c8-4141-9f98-c7d11246edf7", 00:16:42.932 "strip_size_kb": 64, 00:16:42.932 "state": "online", 00:16:42.932 "raid_level": "raid0", 00:16:42.932 "superblock": true, 00:16:42.932 "num_base_bdevs": 2, 00:16:42.932 "num_base_bdevs_discovered": 2, 00:16:42.932 "num_base_bdevs_operational": 2, 00:16:42.932 "base_bdevs_list": [ 00:16:42.932 { 00:16:42.932 "name": "pt1", 00:16:42.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.932 "is_configured": true, 00:16:42.932 "data_offset": 2048, 00:16:42.932 "data_size": 63488 00:16:42.932 }, 00:16:42.932 { 00:16:42.932 "name": "pt2", 00:16:42.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.932 "is_configured": true, 00:16:42.932 "data_offset": 2048, 00:16:42.932 "data_size": 63488 00:16:42.932 } 00:16:42.932 ] 00:16:42.932 } 00:16:42.932 } 00:16:42.932 }' 00:16:42.932 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.932 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:42.932 pt2' 00:16:42.932 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:42.932 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:42.932 11:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.189 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.189 "name": "pt1", 00:16:43.189 "aliases": [ 00:16:43.189 
"00000000-0000-0000-0000-000000000001" 00:16:43.189 ], 00:16:43.189 "product_name": "passthru", 00:16:43.189 "block_size": 512, 00:16:43.189 "num_blocks": 65536, 00:16:43.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.189 "assigned_rate_limits": { 00:16:43.189 "rw_ios_per_sec": 0, 00:16:43.189 "rw_mbytes_per_sec": 0, 00:16:43.190 "r_mbytes_per_sec": 0, 00:16:43.190 "w_mbytes_per_sec": 0 00:16:43.190 }, 00:16:43.190 "claimed": true, 00:16:43.190 "claim_type": "exclusive_write", 00:16:43.190 "zoned": false, 00:16:43.190 "supported_io_types": { 00:16:43.190 "read": true, 00:16:43.190 "write": true, 00:16:43.190 "unmap": true, 00:16:43.190 "write_zeroes": true, 00:16:43.190 "flush": true, 00:16:43.190 "reset": true, 00:16:43.190 "compare": false, 00:16:43.190 "compare_and_write": false, 00:16:43.190 "abort": true, 00:16:43.190 "nvme_admin": false, 00:16:43.190 "nvme_io": false 00:16:43.190 }, 00:16:43.190 "memory_domains": [ 00:16:43.190 { 00:16:43.190 "dma_device_id": "system", 00:16:43.190 "dma_device_type": 1 00:16:43.190 }, 00:16:43.190 { 00:16:43.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.190 "dma_device_type": 2 00:16:43.190 } 00:16:43.190 ], 00:16:43.190 "driver_specific": { 00:16:43.190 "passthru": { 00:16:43.190 "name": "pt1", 00:16:43.190 "base_bdev_name": "malloc1" 00:16:43.190 } 00:16:43.190 } 00:16:43.190 }' 00:16:43.190 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.190 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.190 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.190 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.190 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.190 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.190 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.448 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.448 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.448 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.448 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.448 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.448 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.448 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.448 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:43.706 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.706 "name": "pt2", 00:16:43.706 "aliases": [ 00:16:43.706 "00000000-0000-0000-0000-000000000002" 00:16:43.706 ], 00:16:43.706 "product_name": "passthru", 00:16:43.706 "block_size": 512, 00:16:43.706 "num_blocks": 65536, 00:16:43.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.706 "assigned_rate_limits": { 00:16:43.706 "rw_ios_per_sec": 0, 00:16:43.706 "rw_mbytes_per_sec": 0, 00:16:43.706 "r_mbytes_per_sec": 0, 00:16:43.706 "w_mbytes_per_sec": 0 00:16:43.706 }, 00:16:43.706 "claimed": true, 
00:16:43.706 "claim_type": "exclusive_write", 00:16:43.706 "zoned": false, 00:16:43.706 "supported_io_types": { 00:16:43.706 "read": true, 00:16:43.706 "write": true, 00:16:43.706 "unmap": true, 00:16:43.706 "write_zeroes": true, 00:16:43.706 "flush": true, 00:16:43.706 "reset": true, 00:16:43.706 "compare": false, 00:16:43.706 "compare_and_write": false, 00:16:43.706 "abort": true, 00:16:43.706 "nvme_admin": false, 00:16:43.706 "nvme_io": false 00:16:43.706 }, 00:16:43.706 "memory_domains": [ 00:16:43.706 { 00:16:43.706 "dma_device_id": "system", 00:16:43.706 "dma_device_type": 1 00:16:43.706 }, 00:16:43.706 { 00:16:43.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.706 "dma_device_type": 2 00:16:43.706 } 00:16:43.706 ], 00:16:43.706 "driver_specific": { 00:16:43.706 "passthru": { 00:16:43.706 "name": "pt2", 00:16:43.706 "base_bdev_name": "malloc2" 00:16:43.706 } 00:16:43.706 } 00:16:43.706 }' 00:16:43.706 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.706 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.706 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.706 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.706 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.706 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.706 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.964 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.964 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.964 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.964 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.964 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.964 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:43.964 11:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:44.272 [2024-06-10 11:40:16.200413] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.272 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3f35e82e-16c8-4141-9f98-c7d11246edf7 00:16:44.272 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3f35e82e-16c8-4141-9f98-c7d11246edf7 ']' 00:16:44.272 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:44.531 [2024-06-10 11:40:16.496183] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.531 [2024-06-10 11:40:16.496230] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.531 [2024-06-10 11:40:16.496328] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.531 [2024-06-10 11:40:16.496380] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.531 [2024-06-10 11:40:16.496391] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:44.531 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.531 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:44.789 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:44.790 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:44.790 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:44.790 11:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:45.048 11:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:45.048 11:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:45.307 11:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:45.307 11:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:45.566 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:45.825 [2024-06-10 11:40:17.868485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is 
claimed 00:16:45.825 [2024-06-10 11:40:17.870586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:45.825 [2024-06-10 11:40:17.870680] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:45.825 [2024-06-10 11:40:17.870775] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:45.825 [2024-06-10 11:40:17.870800] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.825 [2024-06-10 11:40:17.870809] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:45.825 request: 00:16:45.825 { 00:16:45.825 "name": "raid_bdev1", 00:16:45.825 "raid_level": "raid0", 00:16:45.826 "base_bdevs": [ 00:16:45.826 "malloc1", 00:16:45.826 "malloc2" 00:16:45.826 ], 00:16:45.826 "strip_size_kb": 64, 00:16:45.826 "superblock": false, 00:16:45.826 "method": "bdev_raid_create", 00:16:45.826 "req_id": 1 00:16:45.826 } 00:16:45.826 Got JSON-RPC error response 00:16:45.826 response: 00:16:45.826 { 00:16:45.826 "code": -17, 00:16:45.826 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:45.826 } 00:16:46.083 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:16:46.083 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:46.083 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:46.083 11:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:46.083 11:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.083 11:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:46.083 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:46.083 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:46.083 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:46.341 [2024-06-10 11:40:18.264490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:46.341 [2024-06-10 11:40:18.264578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.341 [2024-06-10 11:40:18.264609] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:46.341 [2024-06-10 11:40:18.264638] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.341 [2024-06-10 11:40:18.267084] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.341 [2024-06-10 11:40:18.267154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:46.341 [2024-06-10 11:40:18.267266] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:46.341 [2024-06-10 11:40:18.267323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:46.341 pt1 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:16:46.341 11:40:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.341 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.603 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.603 "name": "raid_bdev1", 00:16:46.603 "uuid": "3f35e82e-16c8-4141-9f98-c7d11246edf7", 00:16:46.603 "strip_size_kb": 64, 00:16:46.603 "state": "configuring", 00:16:46.603 "raid_level": "raid0", 00:16:46.603 "superblock": true, 00:16:46.603 "num_base_bdevs": 2, 00:16:46.603 "num_base_bdevs_discovered": 1, 00:16:46.603 "num_base_bdevs_operational": 2, 00:16:46.603 "base_bdevs_list": [ 00:16:46.603 { 00:16:46.603 "name": "pt1", 00:16:46.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:46.603 "is_configured": true, 00:16:46.603 "data_offset": 2048, 00:16:46.603 "data_size": 63488 00:16:46.603 }, 00:16:46.603 { 00:16:46.603 "name": null, 00:16:46.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.603 "is_configured": false, 00:16:46.603 "data_offset": 2048, 00:16:46.603 "data_size": 63488 00:16:46.603 } 00:16:46.603 ] 00:16:46.603 }' 00:16:46.603 11:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.603 11:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.171 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:47.171 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:47.171 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:47.171 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:47.430 [2024-06-10 11:40:19.436780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:47.430 [2024-06-10 11:40:19.436891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.430 [2024-06-10 11:40:19.436926] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:47.431 [2024-06-10 11:40:19.436955] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.431 [2024-06-10 11:40:19.437460] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:47.431 [2024-06-10 11:40:19.437521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:47.431 [2024-06-10 11:40:19.437627] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:47.431 [2024-06-10 11:40:19.437652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:47.431 [2024-06-10 11:40:19.437760] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:47.431 [2024-06-10 11:40:19.437771] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:47.431 [2024-06-10 11:40:19.437892] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:47.431 [2024-06-10 11:40:19.438210] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:47.431 [2024-06-10 11:40:19.438229] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:47.431 [2024-06-10 11:40:19.438366] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.431 pt2 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.431 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.688 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.688 "name": "raid_bdev1", 00:16:47.688 "uuid": "3f35e82e-16c8-4141-9f98-c7d11246edf7", 00:16:47.688 "strip_size_kb": 64, 00:16:47.688 "state": "online", 00:16:47.688 "raid_level": "raid0", 00:16:47.688 "superblock": true, 00:16:47.688 "num_base_bdevs": 2, 00:16:47.688 "num_base_bdevs_discovered": 2, 00:16:47.688 "num_base_bdevs_operational": 2, 00:16:47.688 "base_bdevs_list": [ 00:16:47.688 { 00:16:47.688 "name": "pt1", 00:16:47.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.688 "is_configured": true, 00:16:47.688 "data_offset": 2048, 00:16:47.688 "data_size": 63488 00:16:47.688 }, 00:16:47.688 { 00:16:47.688 "name": "pt2", 00:16:47.688 
"uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.688 "is_configured": true, 00:16:47.688 "data_offset": 2048, 00:16:47.688 "data_size": 63488 00:16:47.688 } 00:16:47.688 ] 00:16:47.688 }' 00:16:47.688 11:40:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.688 11:40:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:48.653 [2024-06-10 11:40:20.685322] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.653 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:48.653 "name": "raid_bdev1", 00:16:48.653 "aliases": [ 00:16:48.653 "3f35e82e-16c8-4141-9f98-c7d11246edf7" 00:16:48.653 ], 00:16:48.653 "product_name": "Raid Volume", 00:16:48.653 "block_size": 512, 00:16:48.653 "num_blocks": 126976, 00:16:48.653 "uuid": "3f35e82e-16c8-4141-9f98-c7d11246edf7", 00:16:48.653 "assigned_rate_limits": { 00:16:48.653 "rw_ios_per_sec": 0, 00:16:48.653 "rw_mbytes_per_sec": 0, 00:16:48.653 "r_mbytes_per_sec": 0, 00:16:48.653 "w_mbytes_per_sec": 0 00:16:48.653 }, 00:16:48.653 "claimed": false, 00:16:48.653 "zoned": false, 00:16:48.653 "supported_io_types": { 00:16:48.653 "read": true, 00:16:48.653 "write": true, 00:16:48.653 "unmap": true, 00:16:48.653 "write_zeroes": true, 00:16:48.653 "flush": true, 00:16:48.653 "reset": true, 00:16:48.653 "compare": false, 00:16:48.653 "compare_and_write": false, 00:16:48.653 "abort": false, 00:16:48.653 "nvme_admin": false, 00:16:48.653 "nvme_io": false 00:16:48.653 }, 00:16:48.653 "memory_domains": [ 00:16:48.653 { 00:16:48.653 "dma_device_id": "system", 00:16:48.653 "dma_device_type": 1 00:16:48.653 }, 00:16:48.653 { 00:16:48.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.653 "dma_device_type": 2 00:16:48.653 }, 00:16:48.653 { 00:16:48.653 "dma_device_id": "system", 00:16:48.653 "dma_device_type": 1 00:16:48.653 }, 00:16:48.653 { 00:16:48.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.653 "dma_device_type": 2 00:16:48.653 } 00:16:48.653 ], 00:16:48.653 "driver_specific": { 00:16:48.653 "raid": { 00:16:48.653 "uuid": "3f35e82e-16c8-4141-9f98-c7d11246edf7", 00:16:48.653 "strip_size_kb": 64, 00:16:48.653 "state": "online", 00:16:48.653 "raid_level": "raid0", 00:16:48.653 "superblock": true, 00:16:48.653 "num_base_bdevs": 2, 00:16:48.653 "num_base_bdevs_discovered": 2, 00:16:48.653 "num_base_bdevs_operational": 2, 00:16:48.653 "base_bdevs_list": [ 00:16:48.653 { 00:16:48.653 "name": "pt1", 00:16:48.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.653 "is_configured": true, 00:16:48.653 
"data_offset": 2048, 00:16:48.653 "data_size": 63488 00:16:48.653 }, 00:16:48.653 { 00:16:48.653 "name": "pt2", 00:16:48.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.653 "is_configured": true, 00:16:48.653 "data_offset": 2048, 00:16:48.653 "data_size": 63488 00:16:48.653 } 00:16:48.653 ] 00:16:48.653 } 00:16:48.653 } 00:16:48.653 }' 00:16:48.911 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.911 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:48.911 pt2' 00:16:48.911 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:48.911 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:48.911 11:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:49.169 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:49.169 "name": "pt1", 00:16:49.169 "aliases": [ 00:16:49.169 "00000000-0000-0000-0000-000000000001" 00:16:49.169 ], 00:16:49.169 "product_name": "passthru", 00:16:49.169 "block_size": 512, 00:16:49.169 "num_blocks": 65536, 00:16:49.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.169 "assigned_rate_limits": { 00:16:49.169 "rw_ios_per_sec": 0, 00:16:49.170 "rw_mbytes_per_sec": 0, 00:16:49.170 "r_mbytes_per_sec": 0, 00:16:49.170 "w_mbytes_per_sec": 0 00:16:49.170 }, 00:16:49.170 "claimed": true, 00:16:49.170 "claim_type": "exclusive_write", 00:16:49.170 "zoned": false, 00:16:49.170 "supported_io_types": { 00:16:49.170 "read": true, 00:16:49.170 "write": true, 00:16:49.170 "unmap": true, 00:16:49.170 "write_zeroes": true, 00:16:49.170 "flush": true, 00:16:49.170 "reset": true, 00:16:49.170 "compare": false, 00:16:49.170 "compare_and_write": false, 00:16:49.170 "abort": true, 00:16:49.170 "nvme_admin": false, 00:16:49.170 "nvme_io": false 00:16:49.170 }, 00:16:49.170 "memory_domains": [ 00:16:49.170 { 00:16:49.170 "dma_device_id": "system", 00:16:49.170 "dma_device_type": 1 00:16:49.170 }, 00:16:49.170 { 00:16:49.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.170 "dma_device_type": 2 00:16:49.170 } 00:16:49.170 ], 00:16:49.170 "driver_specific": { 00:16:49.170 "passthru": { 00:16:49.170 "name": "pt1", 00:16:49.170 "base_bdev_name": "malloc1" 00:16:49.170 } 00:16:49.170 } 00:16:49.170 }' 00:16:49.170 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.170 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.170 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:49.170 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.428 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.428 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:49.428 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.428 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.428 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:49.428 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:16:49.428 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.687 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:49.687 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:49.687 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:49.687 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:49.945 "name": "pt2", 00:16:49.945 "aliases": [ 00:16:49.945 "00000000-0000-0000-0000-000000000002" 00:16:49.945 ], 00:16:49.945 "product_name": "passthru", 00:16:49.945 "block_size": 512, 00:16:49.945 "num_blocks": 65536, 00:16:49.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.945 "assigned_rate_limits": { 00:16:49.945 "rw_ios_per_sec": 0, 00:16:49.945 "rw_mbytes_per_sec": 0, 00:16:49.945 "r_mbytes_per_sec": 0, 00:16:49.945 "w_mbytes_per_sec": 0 00:16:49.945 }, 00:16:49.945 "claimed": true, 00:16:49.945 "claim_type": "exclusive_write", 00:16:49.945 "zoned": false, 00:16:49.945 "supported_io_types": { 00:16:49.945 "read": true, 00:16:49.945 "write": true, 00:16:49.945 "unmap": true, 00:16:49.945 "write_zeroes": true, 00:16:49.945 "flush": true, 00:16:49.945 "reset": true, 00:16:49.945 "compare": false, 00:16:49.945 "compare_and_write": false, 00:16:49.945 "abort": true, 00:16:49.945 "nvme_admin": false, 00:16:49.945 "nvme_io": false 00:16:49.945 }, 00:16:49.945 "memory_domains": [ 00:16:49.945 { 00:16:49.945 "dma_device_id": "system", 00:16:49.945 "dma_device_type": 1 00:16:49.945 }, 00:16:49.945 { 00:16:49.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.945 "dma_device_type": 2 00:16:49.945 } 00:16:49.945 ], 00:16:49.945 "driver_specific": { 00:16:49.945 "passthru": { 00:16:49.945 "name": "pt2", 00:16:49.945 "base_bdev_name": "malloc2" 00:16:49.945 } 00:16:49.945 } 00:16:49.945 }' 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.945 11:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.204 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.204 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.204 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.204 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.204 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:50.204 11:40:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:50.462 [2024-06-10 11:40:22.469734] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3f35e82e-16c8-4141-9f98-c7d11246edf7 '!=' 3f35e82e-16c8-4141-9f98-c7d11246edf7 ']' 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 122372 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 122372 ']' 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 122372 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 122372 00:16:50.462 killing process with pid 122372 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 122372' 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 122372 00:16:50.462 11:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 122372 00:16:50.462 [2024-06-10 11:40:22.512889] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.462 [2024-06-10 11:40:22.512984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.462 [2024-06-10 11:40:22.513037] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.462 [2024-06-10 11:40:22.513049] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:50.721 [2024-06-10 11:40:22.741264] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.632 ************************************ 00:16:52.633 END TEST raid_superblock_test 00:16:52.633 ************************************ 00:16:52.633 11:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:52.633 00:16:52.633 real 0m13.039s 00:16:52.633 user 0m22.459s 00:16:52.633 sys 0m1.768s 00:16:52.633 11:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:52.633 11:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.633 11:40:24 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:16:52.633 11:40:24 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:52.633 11:40:24 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:52.633 11:40:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.633 ************************************ 00:16:52.633 START TEST raid_read_error_test 00:16:52.633 
************************************ 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 2 read 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8Oz7FsoNsj 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=122752 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 122752 /var/tmp/spdk-raid.sock 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 122752 ']' 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:52.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:52.633 11:40:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.633 [2024-06-10 11:40:24.356681] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:16:52.633 [2024-06-10 11:40:24.356860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122752 ] 00:16:52.633 [2024-06-10 11:40:24.519457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.892 [2024-06-10 11:40:24.780150] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.187 [2024-06-10 11:40:25.038226] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.446 11:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:53.446 11:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:16:53.446 11:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:53.446 11:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:53.703 BaseBdev1_malloc 00:16:53.703 11:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:53.961 true 00:16:53.961 11:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:54.219 [2024-06-10 11:40:26.082503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:54.219 [2024-06-10 11:40:26.082624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.219 [2024-06-10 11:40:26.082699] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:54.219 [2024-06-10 11:40:26.082724] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.219 [2024-06-10 11:40:26.085461] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.219 [2024-06-10 11:40:26.085532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:54.219 BaseBdev1 00:16:54.219 11:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:54.219 11:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:54.476 BaseBdev2_malloc 00:16:54.476 11:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:54.735 true 00:16:54.735 11:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:54.993 [2024-06-10 11:40:26.860170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:54.993 [2024-06-10 11:40:26.860297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.993 [2024-06-10 11:40:26.860360] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:54.993 [2024-06-10 11:40:26.860383] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.993 [2024-06-10 11:40:26.863002] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.993 [2024-06-10 11:40:26.863056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:54.993 BaseBdev2 00:16:54.993 11:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:55.251 [2024-06-10 11:40:27.196313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.251 [2024-06-10 11:40:27.198637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.251 [2024-06-10 11:40:27.198966] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:16:55.251 [2024-06-10 11:40:27.198991] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:55.251 [2024-06-10 11:40:27.199150] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:55.251 [2024-06-10 11:40:27.199515] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:16:55.251 [2024-06-10 11:40:27.199537] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:16:55.251 [2024-06-10 11:40:27.199716] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.251 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:55.251 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:55.251 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:55.251 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:55.251 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.251 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:55.251 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.251 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.252 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.252 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.252 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.252 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.509 11:40:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.509 "name": "raid_bdev1", 00:16:55.509 "uuid": "c3348ce6-0905-4de4-a325-bdf6ef72beb3", 00:16:55.509 "strip_size_kb": 64, 00:16:55.509 "state": "online", 00:16:55.509 "raid_level": "raid0", 00:16:55.509 "superblock": true, 00:16:55.509 "num_base_bdevs": 2, 00:16:55.509 "num_base_bdevs_discovered": 2, 00:16:55.509 "num_base_bdevs_operational": 2, 00:16:55.509 "base_bdevs_list": [ 00:16:55.509 { 00:16:55.509 "name": "BaseBdev1", 00:16:55.509 "uuid": "9da3ceac-5e9b-54dd-8fb2-a3d916b71208", 00:16:55.509 "is_configured": true, 00:16:55.509 "data_offset": 2048, 00:16:55.509 "data_size": 63488 00:16:55.509 }, 00:16:55.509 { 00:16:55.509 "name": "BaseBdev2", 00:16:55.509 "uuid": "47a8902d-ec75-57d9-ba94-1d545cd685f4", 00:16:55.509 "is_configured": true, 00:16:55.509 "data_offset": 2048, 00:16:55.509 "data_size": 63488 00:16:55.509 } 00:16:55.509 ] 00:16:55.509 }' 00:16:55.509 11:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.509 11:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.076 11:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:56.076 11:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:56.335 [2024-06-10 11:40:28.238273] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:57.269 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.526 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.782 11:40:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.782 "name": "raid_bdev1", 00:16:57.782 "uuid": "c3348ce6-0905-4de4-a325-bdf6ef72beb3", 00:16:57.782 "strip_size_kb": 64, 00:16:57.782 "state": "online", 00:16:57.782 "raid_level": "raid0", 00:16:57.782 "superblock": true, 00:16:57.782 "num_base_bdevs": 2, 00:16:57.782 "num_base_bdevs_discovered": 2, 00:16:57.782 "num_base_bdevs_operational": 2, 00:16:57.782 "base_bdevs_list": [ 00:16:57.782 { 00:16:57.782 "name": "BaseBdev1", 00:16:57.782 "uuid": "9da3ceac-5e9b-54dd-8fb2-a3d916b71208", 00:16:57.782 "is_configured": true, 00:16:57.782 "data_offset": 2048, 00:16:57.782 "data_size": 63488 00:16:57.782 }, 00:16:57.782 { 00:16:57.782 "name": "BaseBdev2", 00:16:57.782 "uuid": "47a8902d-ec75-57d9-ba94-1d545cd685f4", 00:16:57.782 "is_configured": true, 00:16:57.782 "data_offset": 2048, 00:16:57.782 "data_size": 63488 00:16:57.782 } 00:16:57.782 ] 00:16:57.782 }' 00:16:57.782 11:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.782 11:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.345 11:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:58.602 [2024-06-10 11:40:30.484515] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.602 [2024-06-10 11:40:30.484577] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.602 [2024-06-10 11:40:30.487959] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.602 [2024-06-10 11:40:30.488017] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.602 [2024-06-10 11:40:30.488060] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.602 [2024-06-10 11:40:30.488073] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:16:58.602 0 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 122752 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 122752 ']' 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 122752 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 122752 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 122752' 00:16:58.602 killing process with pid 122752 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 122752 00:16:58.602 [2024-06-10 11:40:30.535341] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.602 11:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 122752 00:16:58.859 [2024-06-10 11:40:30.684180] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8Oz7FsoNsj 00:17:00.787 ************************************ 00:17:00.787 END TEST raid_read_error_test 00:17:00.787 ************************************ 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:17:00.787 00:17:00.787 real 0m8.196s 00:17:00.787 user 0m11.939s 00:17:00.787 sys 0m1.036s 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:00.787 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.787 11:40:32 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:17:00.787 11:40:32 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:00.787 11:40:32 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:00.787 11:40:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.787 ************************************ 00:17:00.787 START TEST raid_write_error_test 00:17:00.787 ************************************ 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 2 write 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:00.787 
11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.oezWISnfvh 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=122957 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 122957 /var/tmp/spdk-raid.sock 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 122957 ']' 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:00.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:00.787 11:40:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.787 [2024-06-10 11:40:32.625955] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:17:00.787 [2024-06-10 11:40:32.626190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122957 ] 00:17:00.787 [2024-06-10 11:40:32.806889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.045 [2024-06-10 11:40:33.058138] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.303 [2024-06-10 11:40:33.320794] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.560 11:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:01.560 11:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:17:01.560 11:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:01.560 11:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:01.817 BaseBdev1_malloc 00:17:01.817 11:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:02.075 true 00:17:02.075 11:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:02.332 [2024-06-10 11:40:34.289694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:02.332 [2024-06-10 11:40:34.289830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.333 [2024-06-10 11:40:34.289882] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:02.333 [2024-06-10 11:40:34.289905] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.333 [2024-06-10 11:40:34.292770] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.333 [2024-06-10 11:40:34.292839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.333 BaseBdev1 00:17:02.333 11:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:02.333 11:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:02.896 BaseBdev2_malloc 00:17:02.896 11:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:03.154 true 00:17:03.154 11:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:03.412 [2024-06-10 11:40:35.272204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:03.412 [2024-06-10 11:40:35.272323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.412 [2024-06-10 11:40:35.272396] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:03.412 [2024-06-10 11:40:35.272420] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.412 [2024-06-10 11:40:35.275050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.412 [2024-06-10 11:40:35.275107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:03.412 BaseBdev2 00:17:03.412 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:03.671 [2024-06-10 11:40:35.496286] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.671 [2024-06-10 11:40:35.498619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.671 [2024-06-10 11:40:35.498886] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:03.671 [2024-06-10 11:40:35.498919] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:03.671 [2024-06-10 11:40:35.499075] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:03.671 [2024-06-10 11:40:35.499451] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:03.671 [2024-06-10 11:40:35.499470] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:03.671 [2024-06-10 11:40:35.499648] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.671 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.930 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.930 "name": "raid_bdev1", 00:17:03.930 "uuid": "3ac7994c-9f5c-494a-a05f-bab814368366", 00:17:03.930 "strip_size_kb": 64, 00:17:03.930 "state": "online", 00:17:03.930 "raid_level": "raid0", 00:17:03.930 "superblock": true, 00:17:03.930 "num_base_bdevs": 2, 00:17:03.930 "num_base_bdevs_discovered": 2, 00:17:03.930 "num_base_bdevs_operational": 2, 00:17:03.930 "base_bdevs_list": [ 00:17:03.930 { 00:17:03.930 "name": 
"BaseBdev1", 00:17:03.930 "uuid": "b2f097d6-c3e5-5c15-844d-bfc0d6dcf3c5", 00:17:03.930 "is_configured": true, 00:17:03.930 "data_offset": 2048, 00:17:03.930 "data_size": 63488 00:17:03.930 }, 00:17:03.930 { 00:17:03.930 "name": "BaseBdev2", 00:17:03.930 "uuid": "12b95a43-ef64-5507-8c64-3ca210359ad8", 00:17:03.930 "is_configured": true, 00:17:03.930 "data_offset": 2048, 00:17:03.930 "data_size": 63488 00:17:03.930 } 00:17:03.930 ] 00:17:03.930 }' 00:17:03.930 11:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.930 11:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.498 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:04.498 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:04.757 [2024-06-10 11:40:36.602226] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:05.693 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.951 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.210 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.210 "name": "raid_bdev1", 00:17:06.210 "uuid": "3ac7994c-9f5c-494a-a05f-bab814368366", 00:17:06.210 "strip_size_kb": 64, 00:17:06.210 "state": "online", 00:17:06.210 "raid_level": "raid0", 00:17:06.210 "superblock": true, 00:17:06.210 "num_base_bdevs": 2, 00:17:06.210 "num_base_bdevs_discovered": 2, 00:17:06.210 "num_base_bdevs_operational": 2, 00:17:06.210 "base_bdevs_list": [ 00:17:06.210 { 00:17:06.210 "name": 
"BaseBdev1", 00:17:06.210 "uuid": "b2f097d6-c3e5-5c15-844d-bfc0d6dcf3c5", 00:17:06.210 "is_configured": true, 00:17:06.210 "data_offset": 2048, 00:17:06.210 "data_size": 63488 00:17:06.210 }, 00:17:06.210 { 00:17:06.210 "name": "BaseBdev2", 00:17:06.210 "uuid": "12b95a43-ef64-5507-8c64-3ca210359ad8", 00:17:06.210 "is_configured": true, 00:17:06.210 "data_offset": 2048, 00:17:06.210 "data_size": 63488 00:17:06.210 } 00:17:06.210 ] 00:17:06.210 }' 00:17:06.210 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.210 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.779 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:06.779 [2024-06-10 11:40:38.836276] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.779 [2024-06-10 11:40:38.836330] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.037 [2024-06-10 11:40:38.839380] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.037 [2024-06-10 11:40:38.839437] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.037 [2024-06-10 11:40:38.839471] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.037 [2024-06-10 11:40:38.839481] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:07.037 0 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 122957 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 122957 ']' 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 122957 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 122957 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 122957' 00:17:07.037 killing process with pid 122957 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 122957 00:17:07.037 [2024-06-10 11:40:38.886763] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:07.037 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 122957 00:17:07.037 [2024-06-10 11:40:39.062122] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.oezWISnfvh 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:08.939 ************************************ 00:17:08.939 END TEST raid_write_error_test 00:17:08.939 
************************************ 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:17:08.939 00:17:08.939 real 0m8.328s 00:17:08.939 user 0m12.142s 00:17:08.939 sys 0m0.995s 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:08.939 11:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.939 11:40:40 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:08.939 11:40:40 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:17:08.939 11:40:40 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:08.939 11:40:40 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:08.939 11:40:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.939 ************************************ 00:17:08.939 START TEST raid_state_function_test 00:17:08.939 ************************************ 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 2 false 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=123165 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123165' 00:17:08.939 Process raid pid: 123165 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 123165 /var/tmp/spdk-raid.sock 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 123165 ']' 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:08.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:08.939 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.939 [2024-06-10 11:40:40.982947] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:17:08.939 [2024-06-10 11:40:40.983183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.198 [2024-06-10 11:40:41.149893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.457 [2024-06-10 11:40:41.407132] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.715 [2024-06-10 11:40:41.640339] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.974 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:09.974 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:17:09.974 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:10.234 [2024-06-10 11:40:42.187106] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.234 [2024-06-10 11:40:42.187218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.234 [2024-06-10 11:40:42.187232] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:10.234 [2024-06-10 11:40:42.187263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.234 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.493 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.493 "name": "Existed_Raid", 00:17:10.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.493 "strip_size_kb": 64, 00:17:10.493 "state": "configuring", 00:17:10.493 "raid_level": "concat", 00:17:10.493 "superblock": false, 00:17:10.493 "num_base_bdevs": 2, 00:17:10.493 "num_base_bdevs_discovered": 0, 00:17:10.493 "num_base_bdevs_operational": 2, 00:17:10.493 "base_bdevs_list": [ 
00:17:10.493 { 00:17:10.493 "name": "BaseBdev1", 00:17:10.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.493 "is_configured": false, 00:17:10.493 "data_offset": 0, 00:17:10.493 "data_size": 0 00:17:10.493 }, 00:17:10.493 { 00:17:10.493 "name": "BaseBdev2", 00:17:10.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.493 "is_configured": false, 00:17:10.493 "data_offset": 0, 00:17:10.493 "data_size": 0 00:17:10.493 } 00:17:10.493 ] 00:17:10.493 }' 00:17:10.493 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.493 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:11.320 [2024-06-10 11:40:43.219201] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:11.320 [2024-06-10 11:40:43.219255] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:11.320 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:11.579 [2024-06-10 11:40:43.483264] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:11.579 [2024-06-10 11:40:43.483347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.579 [2024-06-10 11:40:43.483360] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.579 [2024-06-10 11:40:43.483387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.579 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.837 [2024-06-10 11:40:43.801109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.837 BaseBdev1 00:17:11.837 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:11.837 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:17:11.837 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:11.837 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:17:11.837 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:11.837 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:11.837 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.095 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:12.660 [ 00:17:12.660 { 00:17:12.660 "name": "BaseBdev1", 00:17:12.660 "aliases": [ 00:17:12.660 "173e9d91-3669-41e9-ab8b-b6a7ed62af49" 00:17:12.660 ], 00:17:12.660 "product_name": "Malloc disk", 00:17:12.660 "block_size": 512, 00:17:12.660 
"num_blocks": 65536, 00:17:12.661 "uuid": "173e9d91-3669-41e9-ab8b-b6a7ed62af49", 00:17:12.661 "assigned_rate_limits": { 00:17:12.661 "rw_ios_per_sec": 0, 00:17:12.661 "rw_mbytes_per_sec": 0, 00:17:12.661 "r_mbytes_per_sec": 0, 00:17:12.661 "w_mbytes_per_sec": 0 00:17:12.661 }, 00:17:12.661 "claimed": true, 00:17:12.661 "claim_type": "exclusive_write", 00:17:12.661 "zoned": false, 00:17:12.661 "supported_io_types": { 00:17:12.661 "read": true, 00:17:12.661 "write": true, 00:17:12.661 "unmap": true, 00:17:12.661 "write_zeroes": true, 00:17:12.661 "flush": true, 00:17:12.661 "reset": true, 00:17:12.661 "compare": false, 00:17:12.661 "compare_and_write": false, 00:17:12.661 "abort": true, 00:17:12.661 "nvme_admin": false, 00:17:12.661 "nvme_io": false 00:17:12.661 }, 00:17:12.661 "memory_domains": [ 00:17:12.661 { 00:17:12.661 "dma_device_id": "system", 00:17:12.661 "dma_device_type": 1 00:17:12.661 }, 00:17:12.661 { 00:17:12.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.661 "dma_device_type": 2 00:17:12.661 } 00:17:12.661 ], 00:17:12.661 "driver_specific": {} 00:17:12.661 } 00:17:12.661 ] 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.661 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.918 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.918 "name": "Existed_Raid", 00:17:12.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.918 "strip_size_kb": 64, 00:17:12.918 "state": "configuring", 00:17:12.918 "raid_level": "concat", 00:17:12.918 "superblock": false, 00:17:12.918 "num_base_bdevs": 2, 00:17:12.918 "num_base_bdevs_discovered": 1, 00:17:12.918 "num_base_bdevs_operational": 2, 00:17:12.918 "base_bdevs_list": [ 00:17:12.919 { 00:17:12.919 "name": "BaseBdev1", 00:17:12.919 "uuid": "173e9d91-3669-41e9-ab8b-b6a7ed62af49", 00:17:12.919 "is_configured": true, 00:17:12.919 "data_offset": 0, 00:17:12.919 "data_size": 65536 00:17:12.919 }, 00:17:12.919 { 00:17:12.919 "name": "BaseBdev2", 00:17:12.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.919 
"is_configured": false, 00:17:12.919 "data_offset": 0, 00:17:12.919 "data_size": 0 00:17:12.919 } 00:17:12.919 ] 00:17:12.919 }' 00:17:12.919 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.919 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.483 11:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:14.051 [2024-06-10 11:40:45.817680] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.051 [2024-06-10 11:40:45.817770] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:14.051 11:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:14.318 [2024-06-10 11:40:46.137756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.318 [2024-06-10 11:40:46.140303] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.318 [2024-06-10 11:40:46.140380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.318 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.576 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.576 "name": "Existed_Raid", 00:17:14.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.576 "strip_size_kb": 64, 00:17:14.576 "state": "configuring", 00:17:14.576 "raid_level": "concat", 00:17:14.576 "superblock": false, 00:17:14.576 "num_base_bdevs": 2, 00:17:14.576 "num_base_bdevs_discovered": 1, 00:17:14.576 
"num_base_bdevs_operational": 2, 00:17:14.576 "base_bdevs_list": [ 00:17:14.576 { 00:17:14.576 "name": "BaseBdev1", 00:17:14.576 "uuid": "173e9d91-3669-41e9-ab8b-b6a7ed62af49", 00:17:14.576 "is_configured": true, 00:17:14.576 "data_offset": 0, 00:17:14.576 "data_size": 65536 00:17:14.576 }, 00:17:14.576 { 00:17:14.576 "name": "BaseBdev2", 00:17:14.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.576 "is_configured": false, 00:17:14.576 "data_offset": 0, 00:17:14.576 "data_size": 0 00:17:14.576 } 00:17:14.576 ] 00:17:14.576 }' 00:17:14.576 11:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.576 11:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.143 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:15.401 [2024-06-10 11:40:47.320255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.401 [2024-06-10 11:40:47.320318] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:15.401 [2024-06-10 11:40:47.320327] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:15.401 [2024-06-10 11:40:47.320488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:15.401 [2024-06-10 11:40:47.320847] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:15.401 [2024-06-10 11:40:47.320869] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:15.401 [2024-06-10 11:40:47.321141] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.401 BaseBdev2 00:17:15.401 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:15.401 11:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:17:15.401 11:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:15.401 11:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:17:15.401 11:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:15.401 11:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:15.401 11:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.660 11:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:15.918 [ 00:17:15.918 { 00:17:15.918 "name": "BaseBdev2", 00:17:15.918 "aliases": [ 00:17:15.918 "753e1869-e436-4bf5-b433-527951786e91" 00:17:15.918 ], 00:17:15.918 "product_name": "Malloc disk", 00:17:15.918 "block_size": 512, 00:17:15.918 "num_blocks": 65536, 00:17:15.918 "uuid": "753e1869-e436-4bf5-b433-527951786e91", 00:17:15.918 "assigned_rate_limits": { 00:17:15.918 "rw_ios_per_sec": 0, 00:17:15.918 "rw_mbytes_per_sec": 0, 00:17:15.918 "r_mbytes_per_sec": 0, 00:17:15.918 "w_mbytes_per_sec": 0 00:17:15.918 }, 00:17:15.918 "claimed": true, 00:17:15.918 "claim_type": "exclusive_write", 00:17:15.918 "zoned": 
false, 00:17:15.918 "supported_io_types": { 00:17:15.918 "read": true, 00:17:15.918 "write": true, 00:17:15.918 "unmap": true, 00:17:15.918 "write_zeroes": true, 00:17:15.918 "flush": true, 00:17:15.918 "reset": true, 00:17:15.918 "compare": false, 00:17:15.918 "compare_and_write": false, 00:17:15.918 "abort": true, 00:17:15.918 "nvme_admin": false, 00:17:15.918 "nvme_io": false 00:17:15.918 }, 00:17:15.918 "memory_domains": [ 00:17:15.918 { 00:17:15.918 "dma_device_id": "system", 00:17:15.918 "dma_device_type": 1 00:17:15.918 }, 00:17:15.918 { 00:17:15.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.918 "dma_device_type": 2 00:17:15.918 } 00:17:15.918 ], 00:17:15.918 "driver_specific": {} 00:17:15.918 } 00:17:15.918 ] 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.918 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.240 11:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.240 "name": "Existed_Raid", 00:17:16.240 "uuid": "bc860aa1-c9ea-434b-a5ba-0c8200d5d802", 00:17:16.240 "strip_size_kb": 64, 00:17:16.240 "state": "online", 00:17:16.240 "raid_level": "concat", 00:17:16.240 "superblock": false, 00:17:16.240 "num_base_bdevs": 2, 00:17:16.240 "num_base_bdevs_discovered": 2, 00:17:16.240 "num_base_bdevs_operational": 2, 00:17:16.240 "base_bdevs_list": [ 00:17:16.240 { 00:17:16.240 "name": "BaseBdev1", 00:17:16.240 "uuid": "173e9d91-3669-41e9-ab8b-b6a7ed62af49", 00:17:16.240 "is_configured": true, 00:17:16.240 "data_offset": 0, 00:17:16.240 "data_size": 65536 00:17:16.240 }, 00:17:16.240 { 00:17:16.240 "name": "BaseBdev2", 00:17:16.240 "uuid": "753e1869-e436-4bf5-b433-527951786e91", 00:17:16.240 "is_configured": true, 00:17:16.240 "data_offset": 0, 00:17:16.240 "data_size": 65536 00:17:16.240 } 00:17:16.240 ] 00:17:16.240 }' 00:17:16.240 11:40:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.240 11:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.498 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:16.498 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:16.498 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:16.498 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:16.498 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:16.498 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:16.498 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:16.498 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:16.757 [2024-06-10 11:40:48.747610] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.757 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:16.757 "name": "Existed_Raid", 00:17:16.757 "aliases": [ 00:17:16.757 "bc860aa1-c9ea-434b-a5ba-0c8200d5d802" 00:17:16.757 ], 00:17:16.757 "product_name": "Raid Volume", 00:17:16.757 "block_size": 512, 00:17:16.757 "num_blocks": 131072, 00:17:16.757 "uuid": "bc860aa1-c9ea-434b-a5ba-0c8200d5d802", 00:17:16.757 "assigned_rate_limits": { 00:17:16.757 "rw_ios_per_sec": 0, 00:17:16.757 "rw_mbytes_per_sec": 0, 00:17:16.757 "r_mbytes_per_sec": 0, 00:17:16.757 "w_mbytes_per_sec": 0 00:17:16.757 }, 00:17:16.757 "claimed": false, 00:17:16.757 "zoned": false, 00:17:16.757 "supported_io_types": { 00:17:16.757 "read": true, 00:17:16.757 "write": true, 00:17:16.757 "unmap": true, 00:17:16.757 "write_zeroes": true, 00:17:16.757 "flush": true, 00:17:16.757 "reset": true, 00:17:16.757 "compare": false, 00:17:16.757 "compare_and_write": false, 00:17:16.757 "abort": false, 00:17:16.757 "nvme_admin": false, 00:17:16.757 "nvme_io": false 00:17:16.757 }, 00:17:16.757 "memory_domains": [ 00:17:16.757 { 00:17:16.757 "dma_device_id": "system", 00:17:16.757 "dma_device_type": 1 00:17:16.757 }, 00:17:16.757 { 00:17:16.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.757 "dma_device_type": 2 00:17:16.757 }, 00:17:16.757 { 00:17:16.757 "dma_device_id": "system", 00:17:16.757 "dma_device_type": 1 00:17:16.757 }, 00:17:16.757 { 00:17:16.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.757 "dma_device_type": 2 00:17:16.757 } 00:17:16.757 ], 00:17:16.757 "driver_specific": { 00:17:16.757 "raid": { 00:17:16.757 "uuid": "bc860aa1-c9ea-434b-a5ba-0c8200d5d802", 00:17:16.757 "strip_size_kb": 64, 00:17:16.757 "state": "online", 00:17:16.757 "raid_level": "concat", 00:17:16.757 "superblock": false, 00:17:16.757 "num_base_bdevs": 2, 00:17:16.757 "num_base_bdevs_discovered": 2, 00:17:16.757 "num_base_bdevs_operational": 2, 00:17:16.757 "base_bdevs_list": [ 00:17:16.757 { 00:17:16.757 "name": "BaseBdev1", 00:17:16.757 "uuid": "173e9d91-3669-41e9-ab8b-b6a7ed62af49", 00:17:16.757 "is_configured": true, 00:17:16.757 "data_offset": 0, 00:17:16.757 "data_size": 65536 00:17:16.757 }, 00:17:16.757 { 00:17:16.757 "name": "BaseBdev2", 00:17:16.757 "uuid": "753e1869-e436-4bf5-b433-527951786e91", 00:17:16.757 "is_configured": 
true, 00:17:16.757 "data_offset": 0, 00:17:16.758 "data_size": 65536 00:17:16.758 } 00:17:16.758 ] 00:17:16.758 } 00:17:16.758 } 00:17:16.758 }' 00:17:16.758 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.016 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:17.016 BaseBdev2' 00:17:17.016 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.016 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:17.016 11:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.017 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.017 "name": "BaseBdev1", 00:17:17.017 "aliases": [ 00:17:17.017 "173e9d91-3669-41e9-ab8b-b6a7ed62af49" 00:17:17.017 ], 00:17:17.017 "product_name": "Malloc disk", 00:17:17.017 "block_size": 512, 00:17:17.017 "num_blocks": 65536, 00:17:17.017 "uuid": "173e9d91-3669-41e9-ab8b-b6a7ed62af49", 00:17:17.017 "assigned_rate_limits": { 00:17:17.017 "rw_ios_per_sec": 0, 00:17:17.017 "rw_mbytes_per_sec": 0, 00:17:17.017 "r_mbytes_per_sec": 0, 00:17:17.017 "w_mbytes_per_sec": 0 00:17:17.017 }, 00:17:17.017 "claimed": true, 00:17:17.017 "claim_type": "exclusive_write", 00:17:17.017 "zoned": false, 00:17:17.017 "supported_io_types": { 00:17:17.017 "read": true, 00:17:17.017 "write": true, 00:17:17.017 "unmap": true, 00:17:17.017 "write_zeroes": true, 00:17:17.017 "flush": true, 00:17:17.017 "reset": true, 00:17:17.017 "compare": false, 00:17:17.017 "compare_and_write": false, 00:17:17.017 "abort": true, 00:17:17.017 "nvme_admin": false, 00:17:17.017 "nvme_io": false 00:17:17.017 }, 00:17:17.017 "memory_domains": [ 00:17:17.017 { 00:17:17.017 "dma_device_id": "system", 00:17:17.017 "dma_device_type": 1 00:17:17.017 }, 00:17:17.017 { 00:17:17.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.017 "dma_device_type": 2 00:17:17.017 } 00:17:17.017 ], 00:17:17.017 "driver_specific": {} 00:17:17.017 }' 00:17:17.017 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.277 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.534 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.534 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.534 11:40:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.534 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:17.534 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.793 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.793 "name": "BaseBdev2", 00:17:17.793 "aliases": [ 00:17:17.793 "753e1869-e436-4bf5-b433-527951786e91" 00:17:17.793 ], 00:17:17.793 "product_name": "Malloc disk", 00:17:17.793 "block_size": 512, 00:17:17.793 "num_blocks": 65536, 00:17:17.793 "uuid": "753e1869-e436-4bf5-b433-527951786e91", 00:17:17.793 "assigned_rate_limits": { 00:17:17.793 "rw_ios_per_sec": 0, 00:17:17.793 "rw_mbytes_per_sec": 0, 00:17:17.793 "r_mbytes_per_sec": 0, 00:17:17.793 "w_mbytes_per_sec": 0 00:17:17.793 }, 00:17:17.793 "claimed": true, 00:17:17.793 "claim_type": "exclusive_write", 00:17:17.793 "zoned": false, 00:17:17.793 "supported_io_types": { 00:17:17.793 "read": true, 00:17:17.793 "write": true, 00:17:17.793 "unmap": true, 00:17:17.793 "write_zeroes": true, 00:17:17.793 "flush": true, 00:17:17.793 "reset": true, 00:17:17.793 "compare": false, 00:17:17.793 "compare_and_write": false, 00:17:17.793 "abort": true, 00:17:17.793 "nvme_admin": false, 00:17:17.793 "nvme_io": false 00:17:17.793 }, 00:17:17.793 "memory_domains": [ 00:17:17.793 { 00:17:17.793 "dma_device_id": "system", 00:17:17.793 "dma_device_type": 1 00:17:17.793 }, 00:17:17.793 { 00:17:17.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.793 "dma_device_type": 2 00:17:17.793 } 00:17:17.793 ], 00:17:17.793 "driver_specific": {} 00:17:17.793 }' 00:17:17.793 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.793 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.793 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.793 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.793 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.052 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.053 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.053 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.053 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.053 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.053 11:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.053 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.053 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:18.312 [2024-06-10 11:40:50.323355] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:18.312 [2024-06-10 11:40:50.323398] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.312 [2024-06-10 11:40:50.323472] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:17:18.591 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:18.591 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:18.591 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:18.591 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.592 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.856 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.856 "name": "Existed_Raid", 00:17:18.856 "uuid": "bc860aa1-c9ea-434b-a5ba-0c8200d5d802", 00:17:18.856 "strip_size_kb": 64, 00:17:18.856 "state": "offline", 00:17:18.856 "raid_level": "concat", 00:17:18.856 "superblock": false, 00:17:18.856 "num_base_bdevs": 2, 00:17:18.856 "num_base_bdevs_discovered": 1, 00:17:18.856 "num_base_bdevs_operational": 1, 00:17:18.856 "base_bdevs_list": [ 00:17:18.856 { 00:17:18.856 "name": null, 00:17:18.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.856 "is_configured": false, 00:17:18.856 "data_offset": 0, 00:17:18.856 "data_size": 65536 00:17:18.856 }, 00:17:18.856 { 00:17:18.856 "name": "BaseBdev2", 00:17:18.856 "uuid": "753e1869-e436-4bf5-b433-527951786e91", 00:17:18.856 "is_configured": true, 00:17:18.856 "data_offset": 0, 00:17:18.856 "data_size": 65536 00:17:18.856 } 00:17:18.856 ] 00:17:18.856 }' 00:17:18.856 11:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.856 11:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.443 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:19.443 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:19.443 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:19.443 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:19.716 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:19.716 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:19.716 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:19.990 [2024-06-10 11:40:51.851175] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:19.990 [2024-06-10 11:40:51.851243] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:19.990 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:19.990 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:19.990 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.990 11:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 123165 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 123165 ']' 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 123165 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 123165 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 123165' 00:17:20.263 killing process with pid 123165 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 123165 00:17:20.263 11:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 123165 00:17:20.263 [2024-06-10 11:40:52.193976] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.263 [2024-06-10 11:40:52.194113] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.641 ************************************ 00:17:21.641 END TEST raid_state_function_test 00:17:21.641 ************************************ 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:21.641 00:17:21.641 real 0m12.616s 00:17:21.641 user 0m21.906s 00:17:21.641 sys 0m1.553s 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.641 11:40:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:17:21.641 11:40:53 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:21.641 11:40:53 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:21.641 11:40:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.641 ************************************ 00:17:21.641 START TEST raid_state_function_test_sb 00:17:21.641 ************************************ 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 2 true 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=123548 00:17:21.641 Process raid pid: 123548 
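The superblock variant that begins here reuses raid_state_function_test with superblock=true, so the only functional difference is that superblock_create_arg=-s is forwarded to bdev_raid_create. A minimal sketch of the equivalent manual RPC sequence, assuming the socket path and bdev names used throughout this log (/var/tmp/spdk-raid.sock, BaseBdev1/BaseBdev2, Existed_Raid):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Two 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each),
    # then a concat raid with a 64 KiB strip size. Passing -s makes the raid
    # module reserve space for a superblock, which is why the JSON dumps in
    # this test show data_offset 2048 / data_size 63488 instead of the
    # data_offset 0 / data_size 65536 seen in the non-superblock run above.
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
    $rpc -s $sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
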
00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123548' 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 123548 /var/tmp/spdk-raid.sock 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 123548 ']' 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:21.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:21.641 11:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.641 [2024-06-10 11:40:53.686096] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:17:21.641 [2024-06-10 11:40:53.686306] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.900 [2024-06-10 11:40:53.868100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.158 [2024-06-10 11:40:54.094012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.415 [2024-06-10 11:40:54.299060] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.674 11:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:22.674 11:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:17:22.674 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:22.932 [2024-06-10 11:40:54.777224] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:22.932 [2024-06-10 11:40:54.777322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:22.932 [2024-06-10 11:40:54.777334] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.932 [2024-06-10 11:40:54.777360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
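At this point bdev_raid_create has been accepted even though neither base bdev exists yet, so Existed_Raid sits in the configuring state, and verify_raid_bdev_state is about to assert that. A rough sketch of that verification pattern, built only from the RPCs and jq filters visible in this log (the helper's exact comparison statements are not reproduced here, so the field checks below are an assumed reading of the dumped JSON):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Pull the raid bdev's JSON by name and check the fields the test cares about.
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == concat ]]
    [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 2 ]]
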
00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.932 11:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.191 11:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.191 "name": "Existed_Raid", 00:17:23.191 "uuid": "a15f3bd2-7502-4da2-854b-a2415489b330", 00:17:23.191 "strip_size_kb": 64, 00:17:23.191 "state": "configuring", 00:17:23.191 "raid_level": "concat", 00:17:23.191 "superblock": true, 00:17:23.191 "num_base_bdevs": 2, 00:17:23.191 "num_base_bdevs_discovered": 0, 00:17:23.191 "num_base_bdevs_operational": 2, 00:17:23.191 "base_bdevs_list": [ 00:17:23.191 { 00:17:23.191 "name": "BaseBdev1", 00:17:23.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.191 "is_configured": false, 00:17:23.191 "data_offset": 0, 00:17:23.191 "data_size": 0 00:17:23.191 }, 00:17:23.191 { 00:17:23.191 "name": "BaseBdev2", 00:17:23.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.191 "is_configured": false, 00:17:23.191 "data_offset": 0, 00:17:23.191 "data_size": 0 00:17:23.191 } 00:17:23.191 ] 00:17:23.191 }' 00:17:23.191 11:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.191 11:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.759 11:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.018 [2024-06-10 11:40:55.961340] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.018 [2024-06-10 11:40:55.961385] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:24.018 11:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:24.277 [2024-06-10 11:40:56.225432] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.277 [2024-06-10 11:40:56.225505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.277 [2024-06-10 11:40:56.225515] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.277 [2024-06-10 11:40:56.225540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.277 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.536 [2024-06-10 11:40:56.454598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.536 BaseBdev1 00:17:24.536 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:24.536 11:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:17:24.536 11:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:24.536 11:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:17:24.536 11:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:24.536 11:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:24.536 11:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.795 11:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.056 [ 00:17:25.056 { 00:17:25.056 "name": "BaseBdev1", 00:17:25.056 "aliases": [ 00:17:25.056 "59220710-62ee-4dfc-a910-435e24f554b6" 00:17:25.056 ], 00:17:25.056 "product_name": "Malloc disk", 00:17:25.056 "block_size": 512, 00:17:25.056 "num_blocks": 65536, 00:17:25.056 "uuid": "59220710-62ee-4dfc-a910-435e24f554b6", 00:17:25.056 "assigned_rate_limits": { 00:17:25.056 "rw_ios_per_sec": 0, 00:17:25.056 "rw_mbytes_per_sec": 0, 00:17:25.056 "r_mbytes_per_sec": 0, 00:17:25.056 "w_mbytes_per_sec": 0 00:17:25.056 }, 00:17:25.056 "claimed": true, 00:17:25.056 "claim_type": "exclusive_write", 00:17:25.056 "zoned": false, 00:17:25.056 "supported_io_types": { 00:17:25.056 "read": true, 00:17:25.056 "write": true, 00:17:25.056 "unmap": true, 00:17:25.056 "write_zeroes": true, 00:17:25.056 "flush": true, 00:17:25.056 "reset": true, 00:17:25.056 "compare": false, 00:17:25.056 "compare_and_write": false, 00:17:25.056 "abort": true, 00:17:25.056 "nvme_admin": false, 00:17:25.056 "nvme_io": false 00:17:25.056 }, 00:17:25.056 "memory_domains": [ 00:17:25.056 { 00:17:25.056 "dma_device_id": "system", 00:17:25.056 "dma_device_type": 1 00:17:25.056 }, 00:17:25.056 { 00:17:25.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.056 "dma_device_type": 2 00:17:25.056 } 00:17:25.056 ], 00:17:25.056 "driver_specific": {} 00:17:25.056 } 00:17:25.056 ] 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:25.056 11:40:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.056 11:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.321 11:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.321 "name": "Existed_Raid", 00:17:25.321 "uuid": "79eb4585-377d-466a-9b50-20acb581fcda", 00:17:25.321 "strip_size_kb": 64, 00:17:25.321 "state": "configuring", 00:17:25.321 "raid_level": "concat", 00:17:25.321 "superblock": true, 00:17:25.321 "num_base_bdevs": 2, 00:17:25.321 "num_base_bdevs_discovered": 1, 00:17:25.321 "num_base_bdevs_operational": 2, 00:17:25.321 "base_bdevs_list": [ 00:17:25.321 { 00:17:25.321 "name": "BaseBdev1", 00:17:25.321 "uuid": "59220710-62ee-4dfc-a910-435e24f554b6", 00:17:25.321 "is_configured": true, 00:17:25.321 "data_offset": 2048, 00:17:25.321 "data_size": 63488 00:17:25.321 }, 00:17:25.321 { 00:17:25.321 "name": "BaseBdev2", 00:17:25.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.321 "is_configured": false, 00:17:25.321 "data_offset": 0, 00:17:25.321 "data_size": 0 00:17:25.321 } 00:17:25.321 ] 00:17:25.321 }' 00:17:25.321 11:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.321 11:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.910 11:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.172 [2024-06-10 11:40:58.203133] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.172 [2024-06-10 11:40:58.203219] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:26.172 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:26.738 [2024-06-10 11:40:58.523220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.738 [2024-06-10 11:40:58.525569] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.738 [2024-06-10 11:40:58.525640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.738 11:40:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.738 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.997 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.997 "name": "Existed_Raid", 00:17:26.997 "uuid": "f884a7b0-0d4d-4cd2-93a4-8874a3271dac", 00:17:26.997 "strip_size_kb": 64, 00:17:26.997 "state": "configuring", 00:17:26.997 "raid_level": "concat", 00:17:26.997 "superblock": true, 00:17:26.997 "num_base_bdevs": 2, 00:17:26.997 "num_base_bdevs_discovered": 1, 00:17:26.997 "num_base_bdevs_operational": 2, 00:17:26.997 "base_bdevs_list": [ 00:17:26.997 { 00:17:26.997 "name": "BaseBdev1", 00:17:26.997 "uuid": "59220710-62ee-4dfc-a910-435e24f554b6", 00:17:26.997 "is_configured": true, 00:17:26.997 "data_offset": 2048, 00:17:26.997 "data_size": 63488 00:17:26.997 }, 00:17:26.997 { 00:17:26.997 "name": "BaseBdev2", 00:17:26.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.997 "is_configured": false, 00:17:26.997 "data_offset": 0, 00:17:26.997 "data_size": 0 00:17:26.997 } 00:17:26.997 ] 00:17:26.997 }' 00:17:26.997 11:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.997 11:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.563 11:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.820 [2024-06-10 11:40:59.802819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.820 [2024-06-10 11:40:59.803074] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:27.820 [2024-06-10 11:40:59.803088] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:27.820 [2024-06-10 11:40:59.803238] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:27.820 [2024-06-10 11:40:59.803560] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:27.820 [2024-06-10 11:40:59.803581] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:27.820 [2024-06-10 11:40:59.803739] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.820 BaseBdev2 00:17:27.820 
11:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:27.820 11:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:17:27.820 11:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:27.820 11:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:17:27.820 11:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:27.820 11:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:27.820 11:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.078 11:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.337 [ 00:17:28.337 { 00:17:28.337 "name": "BaseBdev2", 00:17:28.337 "aliases": [ 00:17:28.337 "1654c933-e9bd-4e79-8bb1-36938abc5da9" 00:17:28.337 ], 00:17:28.337 "product_name": "Malloc disk", 00:17:28.337 "block_size": 512, 00:17:28.337 "num_blocks": 65536, 00:17:28.337 "uuid": "1654c933-e9bd-4e79-8bb1-36938abc5da9", 00:17:28.337 "assigned_rate_limits": { 00:17:28.337 "rw_ios_per_sec": 0, 00:17:28.337 "rw_mbytes_per_sec": 0, 00:17:28.337 "r_mbytes_per_sec": 0, 00:17:28.337 "w_mbytes_per_sec": 0 00:17:28.337 }, 00:17:28.337 "claimed": true, 00:17:28.337 "claim_type": "exclusive_write", 00:17:28.337 "zoned": false, 00:17:28.337 "supported_io_types": { 00:17:28.337 "read": true, 00:17:28.337 "write": true, 00:17:28.337 "unmap": true, 00:17:28.337 "write_zeroes": true, 00:17:28.337 "flush": true, 00:17:28.337 "reset": true, 00:17:28.337 "compare": false, 00:17:28.337 "compare_and_write": false, 00:17:28.337 "abort": true, 00:17:28.337 "nvme_admin": false, 00:17:28.337 "nvme_io": false 00:17:28.337 }, 00:17:28.337 "memory_domains": [ 00:17:28.337 { 00:17:28.337 "dma_device_id": "system", 00:17:28.337 "dma_device_type": 1 00:17:28.337 }, 00:17:28.337 { 00:17:28.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.337 "dma_device_type": 2 00:17:28.337 } 00:17:28.337 ], 00:17:28.337 "driver_specific": {} 00:17:28.337 } 00:17:28.337 ] 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.337 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.905 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.905 "name": "Existed_Raid", 00:17:28.905 "uuid": "f884a7b0-0d4d-4cd2-93a4-8874a3271dac", 00:17:28.905 "strip_size_kb": 64, 00:17:28.905 "state": "online", 00:17:28.905 "raid_level": "concat", 00:17:28.905 "superblock": true, 00:17:28.905 "num_base_bdevs": 2, 00:17:28.905 "num_base_bdevs_discovered": 2, 00:17:28.905 "num_base_bdevs_operational": 2, 00:17:28.905 "base_bdevs_list": [ 00:17:28.905 { 00:17:28.905 "name": "BaseBdev1", 00:17:28.905 "uuid": "59220710-62ee-4dfc-a910-435e24f554b6", 00:17:28.905 "is_configured": true, 00:17:28.905 "data_offset": 2048, 00:17:28.905 "data_size": 63488 00:17:28.905 }, 00:17:28.905 { 00:17:28.905 "name": "BaseBdev2", 00:17:28.905 "uuid": "1654c933-e9bd-4e79-8bb1-36938abc5da9", 00:17:28.905 "is_configured": true, 00:17:28.905 "data_offset": 2048, 00:17:28.905 "data_size": 63488 00:17:28.905 } 00:17:28.905 ] 00:17:28.905 }' 00:17:28.905 11:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.905 11:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.163 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:29.164 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:29.164 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:29.164 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:29.164 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:29.164 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:29.164 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:29.164 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:29.730 [2024-06-10 11:41:01.519502] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.730 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:29.730 "name": "Existed_Raid", 00:17:29.730 "aliases": [ 00:17:29.730 "f884a7b0-0d4d-4cd2-93a4-8874a3271dac" 00:17:29.730 ], 00:17:29.730 "product_name": "Raid Volume", 00:17:29.730 "block_size": 512, 00:17:29.730 "num_blocks": 126976, 00:17:29.730 "uuid": "f884a7b0-0d4d-4cd2-93a4-8874a3271dac", 00:17:29.730 "assigned_rate_limits": { 00:17:29.730 "rw_ios_per_sec": 0, 00:17:29.730 "rw_mbytes_per_sec": 0, 00:17:29.730 "r_mbytes_per_sec": 
0, 00:17:29.730 "w_mbytes_per_sec": 0 00:17:29.730 }, 00:17:29.730 "claimed": false, 00:17:29.730 "zoned": false, 00:17:29.730 "supported_io_types": { 00:17:29.730 "read": true, 00:17:29.730 "write": true, 00:17:29.730 "unmap": true, 00:17:29.730 "write_zeroes": true, 00:17:29.730 "flush": true, 00:17:29.731 "reset": true, 00:17:29.731 "compare": false, 00:17:29.731 "compare_and_write": false, 00:17:29.731 "abort": false, 00:17:29.731 "nvme_admin": false, 00:17:29.731 "nvme_io": false 00:17:29.731 }, 00:17:29.731 "memory_domains": [ 00:17:29.731 { 00:17:29.731 "dma_device_id": "system", 00:17:29.731 "dma_device_type": 1 00:17:29.731 }, 00:17:29.731 { 00:17:29.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.731 "dma_device_type": 2 00:17:29.731 }, 00:17:29.731 { 00:17:29.731 "dma_device_id": "system", 00:17:29.731 "dma_device_type": 1 00:17:29.731 }, 00:17:29.731 { 00:17:29.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.731 "dma_device_type": 2 00:17:29.731 } 00:17:29.731 ], 00:17:29.731 "driver_specific": { 00:17:29.731 "raid": { 00:17:29.731 "uuid": "f884a7b0-0d4d-4cd2-93a4-8874a3271dac", 00:17:29.731 "strip_size_kb": 64, 00:17:29.731 "state": "online", 00:17:29.731 "raid_level": "concat", 00:17:29.731 "superblock": true, 00:17:29.731 "num_base_bdevs": 2, 00:17:29.731 "num_base_bdevs_discovered": 2, 00:17:29.731 "num_base_bdevs_operational": 2, 00:17:29.731 "base_bdevs_list": [ 00:17:29.731 { 00:17:29.731 "name": "BaseBdev1", 00:17:29.731 "uuid": "59220710-62ee-4dfc-a910-435e24f554b6", 00:17:29.731 "is_configured": true, 00:17:29.731 "data_offset": 2048, 00:17:29.731 "data_size": 63488 00:17:29.731 }, 00:17:29.731 { 00:17:29.731 "name": "BaseBdev2", 00:17:29.731 "uuid": "1654c933-e9bd-4e79-8bb1-36938abc5da9", 00:17:29.731 "is_configured": true, 00:17:29.731 "data_offset": 2048, 00:17:29.731 "data_size": 63488 00:17:29.731 } 00:17:29.731 ] 00:17:29.731 } 00:17:29.731 } 00:17:29.731 }' 00:17:29.731 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.731 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:29.731 BaseBdev2' 00:17:29.731 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:29.731 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:29.731 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:29.989 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:29.989 "name": "BaseBdev1", 00:17:29.989 "aliases": [ 00:17:29.989 "59220710-62ee-4dfc-a910-435e24f554b6" 00:17:29.989 ], 00:17:29.989 "product_name": "Malloc disk", 00:17:29.989 "block_size": 512, 00:17:29.989 "num_blocks": 65536, 00:17:29.989 "uuid": "59220710-62ee-4dfc-a910-435e24f554b6", 00:17:29.989 "assigned_rate_limits": { 00:17:29.989 "rw_ios_per_sec": 0, 00:17:29.989 "rw_mbytes_per_sec": 0, 00:17:29.989 "r_mbytes_per_sec": 0, 00:17:29.989 "w_mbytes_per_sec": 0 00:17:29.989 }, 00:17:29.989 "claimed": true, 00:17:29.989 "claim_type": "exclusive_write", 00:17:29.989 "zoned": false, 00:17:29.989 "supported_io_types": { 00:17:29.989 "read": true, 00:17:29.989 "write": true, 00:17:29.989 "unmap": true, 00:17:29.989 "write_zeroes": true, 00:17:29.989 "flush": true, 
00:17:29.989 "reset": true, 00:17:29.989 "compare": false, 00:17:29.989 "compare_and_write": false, 00:17:29.989 "abort": true, 00:17:29.989 "nvme_admin": false, 00:17:29.989 "nvme_io": false 00:17:29.989 }, 00:17:29.989 "memory_domains": [ 00:17:29.989 { 00:17:29.989 "dma_device_id": "system", 00:17:29.989 "dma_device_type": 1 00:17:29.989 }, 00:17:29.989 { 00:17:29.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.989 "dma_device_type": 2 00:17:29.989 } 00:17:29.989 ], 00:17:29.989 "driver_specific": {} 00:17:29.989 }' 00:17:29.989 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.989 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.989 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:29.989 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.989 11:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.989 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.989 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:30.247 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:30.247 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:30.247 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:30.247 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:30.247 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:30.247 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:30.247 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:30.247 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:30.505 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:30.505 "name": "BaseBdev2", 00:17:30.505 "aliases": [ 00:17:30.505 "1654c933-e9bd-4e79-8bb1-36938abc5da9" 00:17:30.505 ], 00:17:30.505 "product_name": "Malloc disk", 00:17:30.505 "block_size": 512, 00:17:30.505 "num_blocks": 65536, 00:17:30.505 "uuid": "1654c933-e9bd-4e79-8bb1-36938abc5da9", 00:17:30.505 "assigned_rate_limits": { 00:17:30.505 "rw_ios_per_sec": 0, 00:17:30.505 "rw_mbytes_per_sec": 0, 00:17:30.505 "r_mbytes_per_sec": 0, 00:17:30.505 "w_mbytes_per_sec": 0 00:17:30.505 }, 00:17:30.505 "claimed": true, 00:17:30.505 "claim_type": "exclusive_write", 00:17:30.505 "zoned": false, 00:17:30.505 "supported_io_types": { 00:17:30.505 "read": true, 00:17:30.505 "write": true, 00:17:30.505 "unmap": true, 00:17:30.505 "write_zeroes": true, 00:17:30.505 "flush": true, 00:17:30.505 "reset": true, 00:17:30.505 "compare": false, 00:17:30.505 "compare_and_write": false, 00:17:30.505 "abort": true, 00:17:30.505 "nvme_admin": false, 00:17:30.505 "nvme_io": false 00:17:30.505 }, 00:17:30.505 "memory_domains": [ 00:17:30.505 { 00:17:30.505 "dma_device_id": "system", 00:17:30.505 "dma_device_type": 1 00:17:30.505 }, 00:17:30.505 { 00:17:30.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.505 "dma_device_type": 2 
00:17:30.505 } 00:17:30.505 ], 00:17:30.505 "driver_specific": {} 00:17:30.505 }' 00:17:30.505 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:30.764 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:30.764 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:30.764 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:30.764 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:30.764 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:30.764 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:30.764 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.023 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.023 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.023 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.023 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.023 11:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:31.281 [2024-06-10 11:41:03.167748] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.281 [2024-06-10 11:41:03.167936] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.281 [2024-06-10 11:41:03.168115] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.281 11:41:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.281 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.282 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.846 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.846 "name": "Existed_Raid", 00:17:31.846 "uuid": "f884a7b0-0d4d-4cd2-93a4-8874a3271dac", 00:17:31.846 "strip_size_kb": 64, 00:17:31.846 "state": "offline", 00:17:31.846 "raid_level": "concat", 00:17:31.846 "superblock": true, 00:17:31.846 "num_base_bdevs": 2, 00:17:31.846 "num_base_bdevs_discovered": 1, 00:17:31.846 "num_base_bdevs_operational": 1, 00:17:31.846 "base_bdevs_list": [ 00:17:31.846 { 00:17:31.846 "name": null, 00:17:31.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.846 "is_configured": false, 00:17:31.846 "data_offset": 2048, 00:17:31.846 "data_size": 63488 00:17:31.846 }, 00:17:31.846 { 00:17:31.846 "name": "BaseBdev2", 00:17:31.846 "uuid": "1654c933-e9bd-4e79-8bb1-36938abc5da9", 00:17:31.846 "is_configured": true, 00:17:31.846 "data_offset": 2048, 00:17:31.846 "data_size": 63488 00:17:31.846 } 00:17:31.846 ] 00:17:31.846 }' 00:17:31.846 11:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.846 11:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.412 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:32.412 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:32.412 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:32.412 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.671 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:32.671 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:32.671 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:32.671 [2024-06-10 11:41:04.682506] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:32.671 [2024-06-10 11:41:04.682831] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:32.930 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:32.930 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:32.930 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.930 11:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:33.189 11:41:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 123548 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 123548 ']' 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 123548 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 123548 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 123548' 00:17:33.189 killing process with pid 123548 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 123548 00:17:33.189 [2024-06-10 11:41:05.177687] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.189 11:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 123548 00:17:33.189 [2024-06-10 11:41:05.177995] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.567 11:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:34.567 00:17:34.567 real 0m12.992s 00:17:34.567 user 0m22.333s 00:17:34.567 sys 0m1.752s 00:17:34.567 11:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:34.567 ************************************ 00:17:34.567 END TEST raid_state_function_test_sb 00:17:34.567 11:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.567 ************************************ 00:17:34.826 11:41:06 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:17:34.826 11:41:06 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:17:34.826 11:41:06 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:34.826 11:41:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.826 ************************************ 00:17:34.826 START TEST raid_superblock_test 00:17:34.826 ************************************ 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test concat 2 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt_uuid=() 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=123950 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 123950 /var/tmp/spdk-raid.sock 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 123950 ']' 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:34.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:34.826 11:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.826 [2024-06-10 11:41:06.743529] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
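Unlike the state-function tests, raid_superblock_test builds each base bdev as a malloc disk wrapped in a passthru bdev with a fixed UUID before assembling the array, as the next RPC calls show. A condensed sketch of that construction, with names and UUIDs taken from the calls below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    for i in 1 2; do
        # malloc1/malloc2 provide the storage; pt1/pt2 pin a predictable UUID
        # on each base bdev (a plain malloc bdev gets a random UUID).
        $rpc -s $sock bdev_malloc_create 32 512 -b "malloc$i"
        $rpc -s $sock bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # Assemble the concat array with a superblock (-s) on top of the passthru bdevs.
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
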
00:17:34.826 [2024-06-10 11:41:06.743921] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123950 ] 00:17:35.084 [2024-06-10 11:41:06.923562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.084 [2024-06-10 11:41:07.132665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.343 [2024-06-10 11:41:07.343042] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.601 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:36.169 malloc1 00:17:36.169 11:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.428 [2024-06-10 11:41:08.241303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.428 [2024-06-10 11:41:08.241657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.428 [2024-06-10 11:41:08.241749] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:36.428 [2024-06-10 11:41:08.241991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.428 [2024-06-10 11:41:08.244362] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.428 [2024-06-10 11:41:08.244533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.428 pt1 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.428 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:36.686 malloc2 00:17:36.686 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.944 [2024-06-10 11:41:08.788220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.944 [2024-06-10 11:41:08.788579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.944 [2024-06-10 11:41:08.788679] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:36.944 [2024-06-10 11:41:08.788796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.944 [2024-06-10 11:41:08.791330] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.944 [2024-06-10 11:41:08.791515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.944 pt2 00:17:36.944 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:36.944 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:36.944 11:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:17:36.944 [2024-06-10 11:41:09.000438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.944 [2024-06-10 11:41:09.002705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.202 [2024-06-10 11:41:09.003040] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:17:37.202 [2024-06-10 11:41:09.003145] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:37.202 [2024-06-10 11:41:09.003333] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:37.202 [2024-06-10 11:41:09.003715] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:17:37.202 [2024-06-10 11:41:09.003838] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:17:37.202 [2024-06-10 11:41:09.004084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.203 "name": "raid_bdev1", 00:17:37.203 "uuid": "d14996f8-2fa4-4791-a67a-12e02e2a5ec2", 00:17:37.203 "strip_size_kb": 64, 00:17:37.203 "state": "online", 00:17:37.203 "raid_level": "concat", 00:17:37.203 "superblock": true, 00:17:37.203 "num_base_bdevs": 2, 00:17:37.203 "num_base_bdevs_discovered": 2, 00:17:37.203 "num_base_bdevs_operational": 2, 00:17:37.203 "base_bdevs_list": [ 00:17:37.203 { 00:17:37.203 "name": "pt1", 00:17:37.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.203 "is_configured": true, 00:17:37.203 "data_offset": 2048, 00:17:37.203 "data_size": 63488 00:17:37.203 }, 00:17:37.203 { 00:17:37.203 "name": "pt2", 00:17:37.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.203 "is_configured": true, 00:17:37.203 "data_offset": 2048, 00:17:37.203 "data_size": 63488 00:17:37.203 } 00:17:37.203 ] 00:17:37.203 }' 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.203 11:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.769 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:37.769 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:37.769 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:37.769 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:37.769 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:37.769 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:37.769 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.769 11:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:38.028 [2024-06-10 11:41:10.008854] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.028 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:38.028 "name": "raid_bdev1", 00:17:38.028 "aliases": [ 00:17:38.028 "d14996f8-2fa4-4791-a67a-12e02e2a5ec2" 00:17:38.028 ], 00:17:38.028 "product_name": "Raid Volume", 00:17:38.028 "block_size": 512, 00:17:38.028 "num_blocks": 126976, 00:17:38.028 "uuid": "d14996f8-2fa4-4791-a67a-12e02e2a5ec2", 00:17:38.028 "assigned_rate_limits": { 00:17:38.028 "rw_ios_per_sec": 0, 00:17:38.028 "rw_mbytes_per_sec": 0, 00:17:38.028 "r_mbytes_per_sec": 0, 00:17:38.028 "w_mbytes_per_sec": 0 00:17:38.028 }, 
00:17:38.028 "claimed": false, 00:17:38.028 "zoned": false, 00:17:38.028 "supported_io_types": { 00:17:38.028 "read": true, 00:17:38.028 "write": true, 00:17:38.028 "unmap": true, 00:17:38.028 "write_zeroes": true, 00:17:38.028 "flush": true, 00:17:38.028 "reset": true, 00:17:38.028 "compare": false, 00:17:38.028 "compare_and_write": false, 00:17:38.028 "abort": false, 00:17:38.028 "nvme_admin": false, 00:17:38.028 "nvme_io": false 00:17:38.028 }, 00:17:38.028 "memory_domains": [ 00:17:38.028 { 00:17:38.028 "dma_device_id": "system", 00:17:38.028 "dma_device_type": 1 00:17:38.028 }, 00:17:38.028 { 00:17:38.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.029 "dma_device_type": 2 00:17:38.029 }, 00:17:38.029 { 00:17:38.029 "dma_device_id": "system", 00:17:38.029 "dma_device_type": 1 00:17:38.029 }, 00:17:38.029 { 00:17:38.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.029 "dma_device_type": 2 00:17:38.029 } 00:17:38.029 ], 00:17:38.029 "driver_specific": { 00:17:38.029 "raid": { 00:17:38.029 "uuid": "d14996f8-2fa4-4791-a67a-12e02e2a5ec2", 00:17:38.029 "strip_size_kb": 64, 00:17:38.029 "state": "online", 00:17:38.029 "raid_level": "concat", 00:17:38.029 "superblock": true, 00:17:38.029 "num_base_bdevs": 2, 00:17:38.029 "num_base_bdevs_discovered": 2, 00:17:38.029 "num_base_bdevs_operational": 2, 00:17:38.029 "base_bdevs_list": [ 00:17:38.029 { 00:17:38.029 "name": "pt1", 00:17:38.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:38.029 "is_configured": true, 00:17:38.029 "data_offset": 2048, 00:17:38.029 "data_size": 63488 00:17:38.029 }, 00:17:38.029 { 00:17:38.029 "name": "pt2", 00:17:38.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.029 "is_configured": true, 00:17:38.029 "data_offset": 2048, 00:17:38.029 "data_size": 63488 00:17:38.029 } 00:17:38.029 ] 00:17:38.029 } 00:17:38.029 } 00:17:38.029 }' 00:17:38.029 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.029 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:38.029 pt2' 00:17:38.029 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:38.029 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:38.029 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:38.323 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:38.323 "name": "pt1", 00:17:38.323 "aliases": [ 00:17:38.323 "00000000-0000-0000-0000-000000000001" 00:17:38.323 ], 00:17:38.323 "product_name": "passthru", 00:17:38.323 "block_size": 512, 00:17:38.323 "num_blocks": 65536, 00:17:38.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:38.323 "assigned_rate_limits": { 00:17:38.323 "rw_ios_per_sec": 0, 00:17:38.323 "rw_mbytes_per_sec": 0, 00:17:38.323 "r_mbytes_per_sec": 0, 00:17:38.323 "w_mbytes_per_sec": 0 00:17:38.323 }, 00:17:38.323 "claimed": true, 00:17:38.323 "claim_type": "exclusive_write", 00:17:38.323 "zoned": false, 00:17:38.323 "supported_io_types": { 00:17:38.323 "read": true, 00:17:38.323 "write": true, 00:17:38.323 "unmap": true, 00:17:38.323 "write_zeroes": true, 00:17:38.323 "flush": true, 00:17:38.323 "reset": true, 00:17:38.323 "compare": false, 00:17:38.323 "compare_and_write": false, 00:17:38.323 "abort": true, 00:17:38.323 
"nvme_admin": false, 00:17:38.323 "nvme_io": false 00:17:38.323 }, 00:17:38.323 "memory_domains": [ 00:17:38.323 { 00:17:38.323 "dma_device_id": "system", 00:17:38.323 "dma_device_type": 1 00:17:38.323 }, 00:17:38.323 { 00:17:38.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.323 "dma_device_type": 2 00:17:38.323 } 00:17:38.323 ], 00:17:38.323 "driver_specific": { 00:17:38.323 "passthru": { 00:17:38.323 "name": "pt1", 00:17:38.323 "base_bdev_name": "malloc1" 00:17:38.323 } 00:17:38.323 } 00:17:38.323 }' 00:17:38.323 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.581 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.839 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.839 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.839 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:38.839 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:38.839 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:39.098 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:39.098 "name": "pt2", 00:17:39.098 "aliases": [ 00:17:39.098 "00000000-0000-0000-0000-000000000002" 00:17:39.098 ], 00:17:39.098 "product_name": "passthru", 00:17:39.098 "block_size": 512, 00:17:39.098 "num_blocks": 65536, 00:17:39.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.098 "assigned_rate_limits": { 00:17:39.098 "rw_ios_per_sec": 0, 00:17:39.098 "rw_mbytes_per_sec": 0, 00:17:39.098 "r_mbytes_per_sec": 0, 00:17:39.098 "w_mbytes_per_sec": 0 00:17:39.098 }, 00:17:39.098 "claimed": true, 00:17:39.098 "claim_type": "exclusive_write", 00:17:39.098 "zoned": false, 00:17:39.098 "supported_io_types": { 00:17:39.098 "read": true, 00:17:39.098 "write": true, 00:17:39.098 "unmap": true, 00:17:39.098 "write_zeroes": true, 00:17:39.098 "flush": true, 00:17:39.098 "reset": true, 00:17:39.098 "compare": false, 00:17:39.098 "compare_and_write": false, 00:17:39.098 "abort": true, 00:17:39.098 "nvme_admin": false, 00:17:39.098 "nvme_io": false 00:17:39.098 }, 00:17:39.098 "memory_domains": [ 00:17:39.098 { 00:17:39.098 "dma_device_id": "system", 00:17:39.098 "dma_device_type": 1 00:17:39.098 }, 00:17:39.098 { 00:17:39.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.098 "dma_device_type": 2 00:17:39.098 } 00:17:39.098 ], 00:17:39.098 "driver_specific": { 00:17:39.098 "passthru": { 00:17:39.098 "name": "pt2", 00:17:39.098 
"base_bdev_name": "malloc2" 00:17:39.098 } 00:17:39.098 } 00:17:39.098 }' 00:17:39.098 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.098 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.098 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:39.098 11:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.098 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.098 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:39.098 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.098 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.357 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.357 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.357 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.357 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.357 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:39.357 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:39.616 [2024-06-10 11:41:11.609155] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.616 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d14996f8-2fa4-4791-a67a-12e02e2a5ec2 00:17:39.616 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d14996f8-2fa4-4791-a67a-12e02e2a5ec2 ']' 00:17:39.616 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:39.874 [2024-06-10 11:41:11.808987] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.874 [2024-06-10 11:41:11.809209] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.874 [2024-06-10 11:41:11.809370] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.874 [2024-06-10 11:41:11.809504] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.874 [2024-06-10 11:41:11.809580] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:17:39.874 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:39.874 11:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.132 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:40.132 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:40.132 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.132 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:17:40.390 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.390 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:40.656 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:40.656 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:40.926 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:40.926 11:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:40.926 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:17:40.926 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:40.926 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.926 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:40.927 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.927 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:40.927 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.927 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:40.927 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.927 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:40.927 11:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:41.185 [2024-06-10 11:41:12.997251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:41.185 [2024-06-10 11:41:12.999566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:41.185 [2024-06-10 11:41:12.999781] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:41.185 [2024-06-10 11:41:12.999969] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:41.185 [2024-06-10 11:41:13.000103] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.185 [2024-06-10 11:41:13.000144] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:17:41.185 request: 00:17:41.185 { 00:17:41.185 "name": "raid_bdev1", 00:17:41.185 "raid_level": "concat", 
00:17:41.185 "base_bdevs": [ 00:17:41.185 "malloc1", 00:17:41.185 "malloc2" 00:17:41.185 ], 00:17:41.185 "strip_size_kb": 64, 00:17:41.185 "superblock": false, 00:17:41.185 "method": "bdev_raid_create", 00:17:41.185 "req_id": 1 00:17:41.185 } 00:17:41.185 Got JSON-RPC error response 00:17:41.185 response: 00:17:41.185 { 00:17:41.185 "code": -17, 00:17:41.185 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:41.185 } 00:17:41.185 11:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:17:41.185 11:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:41.185 11:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:41.185 11:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:41.185 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.185 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:41.443 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:41.443 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:41.443 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:41.702 [2024-06-10 11:41:13.513273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:41.702 [2024-06-10 11:41:13.513554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.702 [2024-06-10 11:41:13.513693] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:41.702 [2024-06-10 11:41:13.513792] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.702 [2024-06-10 11:41:13.516359] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.702 [2024-06-10 11:41:13.516555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:41.702 [2024-06-10 11:41:13.516808] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:41.702 [2024-06-10 11:41:13.516959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:41.702 pt1 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.702 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:41.703 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.703 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.703 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.961 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.961 "name": "raid_bdev1", 00:17:41.961 "uuid": "d14996f8-2fa4-4791-a67a-12e02e2a5ec2", 00:17:41.961 "strip_size_kb": 64, 00:17:41.961 "state": "configuring", 00:17:41.961 "raid_level": "concat", 00:17:41.961 "superblock": true, 00:17:41.961 "num_base_bdevs": 2, 00:17:41.961 "num_base_bdevs_discovered": 1, 00:17:41.961 "num_base_bdevs_operational": 2, 00:17:41.961 "base_bdevs_list": [ 00:17:41.961 { 00:17:41.961 "name": "pt1", 00:17:41.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.961 "is_configured": true, 00:17:41.961 "data_offset": 2048, 00:17:41.961 "data_size": 63488 00:17:41.961 }, 00:17:41.961 { 00:17:41.961 "name": null, 00:17:41.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.961 "is_configured": false, 00:17:41.961 "data_offset": 2048, 00:17:41.961 "data_size": 63488 00:17:41.961 } 00:17:41.961 ] 00:17:41.961 }' 00:17:41.961 11:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.961 11:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.528 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:42.528 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:42.528 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:42.528 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.786 [2024-06-10 11:41:14.641524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.786 [2024-06-10 11:41:14.641832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.786 [2024-06-10 11:41:14.641901] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:42.786 [2024-06-10 11:41:14.641997] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.786 [2024-06-10 11:41:14.642508] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.786 [2024-06-10 11:41:14.642682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.786 [2024-06-10 11:41:14.642898] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:42.786 [2024-06-10 11:41:14.643038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.786 [2024-06-10 11:41:14.643209] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:17:42.786 [2024-06-10 11:41:14.643312] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:42.786 [2024-06-10 11:41:14.643457] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:42.786 [2024-06-10 11:41:14.643927] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:17:42.786 [2024-06-10 11:41:14.644037] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:17:42.786 [2024-06-10 11:41:14.644268] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.786 pt2 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.786 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.084 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.084 "name": "raid_bdev1", 00:17:43.084 "uuid": "d14996f8-2fa4-4791-a67a-12e02e2a5ec2", 00:17:43.084 "strip_size_kb": 64, 00:17:43.084 "state": "online", 00:17:43.084 "raid_level": "concat", 00:17:43.084 "superblock": true, 00:17:43.084 "num_base_bdevs": 2, 00:17:43.084 "num_base_bdevs_discovered": 2, 00:17:43.084 "num_base_bdevs_operational": 2, 00:17:43.084 "base_bdevs_list": [ 00:17:43.084 { 00:17:43.084 "name": "pt1", 00:17:43.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.084 "is_configured": true, 00:17:43.084 "data_offset": 2048, 00:17:43.084 "data_size": 63488 00:17:43.084 }, 00:17:43.084 { 00:17:43.084 "name": "pt2", 00:17:43.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.084 "is_configured": true, 00:17:43.084 "data_offset": 2048, 00:17:43.084 "data_size": 63488 00:17:43.084 } 00:17:43.084 ] 00:17:43.084 }' 00:17:43.084 11:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.084 11:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.662 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:43.662 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:43.662 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:43.662 11:41:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:43.662 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:43.662 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:43.662 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:43.662 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:43.920 [2024-06-10 11:41:15.781975] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.920 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:43.920 "name": "raid_bdev1", 00:17:43.920 "aliases": [ 00:17:43.920 "d14996f8-2fa4-4791-a67a-12e02e2a5ec2" 00:17:43.920 ], 00:17:43.920 "product_name": "Raid Volume", 00:17:43.920 "block_size": 512, 00:17:43.920 "num_blocks": 126976, 00:17:43.920 "uuid": "d14996f8-2fa4-4791-a67a-12e02e2a5ec2", 00:17:43.920 "assigned_rate_limits": { 00:17:43.920 "rw_ios_per_sec": 0, 00:17:43.920 "rw_mbytes_per_sec": 0, 00:17:43.920 "r_mbytes_per_sec": 0, 00:17:43.920 "w_mbytes_per_sec": 0 00:17:43.920 }, 00:17:43.920 "claimed": false, 00:17:43.920 "zoned": false, 00:17:43.920 "supported_io_types": { 00:17:43.920 "read": true, 00:17:43.920 "write": true, 00:17:43.920 "unmap": true, 00:17:43.920 "write_zeroes": true, 00:17:43.920 "flush": true, 00:17:43.920 "reset": true, 00:17:43.920 "compare": false, 00:17:43.920 "compare_and_write": false, 00:17:43.920 "abort": false, 00:17:43.920 "nvme_admin": false, 00:17:43.920 "nvme_io": false 00:17:43.920 }, 00:17:43.920 "memory_domains": [ 00:17:43.920 { 00:17:43.920 "dma_device_id": "system", 00:17:43.920 "dma_device_type": 1 00:17:43.920 }, 00:17:43.920 { 00:17:43.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.920 "dma_device_type": 2 00:17:43.920 }, 00:17:43.920 { 00:17:43.920 "dma_device_id": "system", 00:17:43.920 "dma_device_type": 1 00:17:43.920 }, 00:17:43.920 { 00:17:43.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.920 "dma_device_type": 2 00:17:43.920 } 00:17:43.920 ], 00:17:43.920 "driver_specific": { 00:17:43.920 "raid": { 00:17:43.920 "uuid": "d14996f8-2fa4-4791-a67a-12e02e2a5ec2", 00:17:43.920 "strip_size_kb": 64, 00:17:43.920 "state": "online", 00:17:43.920 "raid_level": "concat", 00:17:43.920 "superblock": true, 00:17:43.921 "num_base_bdevs": 2, 00:17:43.921 "num_base_bdevs_discovered": 2, 00:17:43.921 "num_base_bdevs_operational": 2, 00:17:43.921 "base_bdevs_list": [ 00:17:43.921 { 00:17:43.921 "name": "pt1", 00:17:43.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.921 "is_configured": true, 00:17:43.921 "data_offset": 2048, 00:17:43.921 "data_size": 63488 00:17:43.921 }, 00:17:43.921 { 00:17:43.921 "name": "pt2", 00:17:43.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.921 "is_configured": true, 00:17:43.921 "data_offset": 2048, 00:17:43.921 "data_size": 63488 00:17:43.921 } 00:17:43.921 ] 00:17:43.921 } 00:17:43.921 } 00:17:43.921 }' 00:17:43.921 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.921 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:43.921 pt2' 00:17:43.921 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:43.921 11:41:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:43.921 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.180 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.180 "name": "pt1", 00:17:44.180 "aliases": [ 00:17:44.180 "00000000-0000-0000-0000-000000000001" 00:17:44.180 ], 00:17:44.180 "product_name": "passthru", 00:17:44.180 "block_size": 512, 00:17:44.180 "num_blocks": 65536, 00:17:44.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.180 "assigned_rate_limits": { 00:17:44.180 "rw_ios_per_sec": 0, 00:17:44.180 "rw_mbytes_per_sec": 0, 00:17:44.180 "r_mbytes_per_sec": 0, 00:17:44.180 "w_mbytes_per_sec": 0 00:17:44.180 }, 00:17:44.180 "claimed": true, 00:17:44.180 "claim_type": "exclusive_write", 00:17:44.180 "zoned": false, 00:17:44.180 "supported_io_types": { 00:17:44.180 "read": true, 00:17:44.180 "write": true, 00:17:44.180 "unmap": true, 00:17:44.180 "write_zeroes": true, 00:17:44.180 "flush": true, 00:17:44.180 "reset": true, 00:17:44.180 "compare": false, 00:17:44.180 "compare_and_write": false, 00:17:44.180 "abort": true, 00:17:44.180 "nvme_admin": false, 00:17:44.180 "nvme_io": false 00:17:44.180 }, 00:17:44.180 "memory_domains": [ 00:17:44.180 { 00:17:44.180 "dma_device_id": "system", 00:17:44.180 "dma_device_type": 1 00:17:44.180 }, 00:17:44.180 { 00:17:44.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.180 "dma_device_type": 2 00:17:44.180 } 00:17:44.180 ], 00:17:44.180 "driver_specific": { 00:17:44.180 "passthru": { 00:17:44.180 "name": "pt1", 00:17:44.180 "base_bdev_name": "malloc1" 00:17:44.180 } 00:17:44.180 } 00:17:44.180 }' 00:17:44.180 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.180 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.180 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:44.180 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:44.439 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.697 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.697 "name": "pt2", 00:17:44.697 "aliases": [ 00:17:44.697 "00000000-0000-0000-0000-000000000002" 
00:17:44.697 ], 00:17:44.697 "product_name": "passthru", 00:17:44.697 "block_size": 512, 00:17:44.697 "num_blocks": 65536, 00:17:44.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.697 "assigned_rate_limits": { 00:17:44.697 "rw_ios_per_sec": 0, 00:17:44.697 "rw_mbytes_per_sec": 0, 00:17:44.697 "r_mbytes_per_sec": 0, 00:17:44.697 "w_mbytes_per_sec": 0 00:17:44.697 }, 00:17:44.697 "claimed": true, 00:17:44.697 "claim_type": "exclusive_write", 00:17:44.697 "zoned": false, 00:17:44.697 "supported_io_types": { 00:17:44.697 "read": true, 00:17:44.697 "write": true, 00:17:44.697 "unmap": true, 00:17:44.697 "write_zeroes": true, 00:17:44.697 "flush": true, 00:17:44.697 "reset": true, 00:17:44.697 "compare": false, 00:17:44.697 "compare_and_write": false, 00:17:44.697 "abort": true, 00:17:44.697 "nvme_admin": false, 00:17:44.697 "nvme_io": false 00:17:44.697 }, 00:17:44.697 "memory_domains": [ 00:17:44.697 { 00:17:44.697 "dma_device_id": "system", 00:17:44.697 "dma_device_type": 1 00:17:44.697 }, 00:17:44.697 { 00:17:44.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.697 "dma_device_type": 2 00:17:44.697 } 00:17:44.697 ], 00:17:44.697 "driver_specific": { 00:17:44.697 "passthru": { 00:17:44.697 "name": "pt2", 00:17:44.697 "base_bdev_name": "malloc2" 00:17:44.697 } 00:17:44.697 } 00:17:44.697 }' 00:17:44.697 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.697 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.956 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:44.956 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.956 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.956 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.956 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.956 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.956 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.956 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.956 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.215 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:45.215 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:45.215 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:45.474 [2024-06-10 11:41:17.306304] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d14996f8-2fa4-4791-a67a-12e02e2a5ec2 '!=' d14996f8-2fa4-4791-a67a-12e02e2a5ec2 ']' 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 123950 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@949 -- # '[' -z 123950 ']' 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 123950 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 123950 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 123950' 00:17:45.474 killing process with pid 123950 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 123950 00:17:45.474 [2024-06-10 11:41:17.362894] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.474 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 123950 00:17:45.474 [2024-06-10 11:41:17.363101] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.474 [2024-06-10 11:41:17.363226] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.474 [2024-06-10 11:41:17.363309] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:17:45.732 [2024-06-10 11:41:17.572927] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.109 ************************************ 00:17:47.109 END TEST raid_superblock_test 00:17:47.109 ************************************ 00:17:47.109 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:47.109 00:17:47.109 real 0m12.258s 00:17:47.109 user 0m21.085s 00:17:47.109 sys 0m1.700s 00:17:47.109 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:47.109 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.109 11:41:18 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:17:47.109 11:41:18 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:47.109 11:41:18 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:47.109 11:41:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.109 ************************************ 00:17:47.109 START TEST raid_read_error_test 00:17:47.109 ************************************ 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 2 read 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:47.109 
11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:47.109 11:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.eTLU2CIFNz 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=124327 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 124327 /var/tmp/spdk-raid.sock 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 124327 ']' 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:47.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:47.109 11:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.109 [2024-06-10 11:41:19.103221] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
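Unlike the superblock test, this pass is driven by bdevperf; the lines below are a sketch of the flow, not the test source, assembled from the commands visible in this trace (the -z flag keeps bdevperf idle until a perform_tests RPC arrives, and the /raidtest/tmp.eTLU2CIFNz name comes from mktemp, so it differs per run).

sock=/var/tmp/spdk-raid.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
log=/raidtest/tmp.eTLU2CIFNz    # produced by 'mktemp -p /raidtest' in the trace above
# -z: start idle and wait for an RPC before running I/O; the other flags are copied verbatim
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$sock" -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$log" &
# the trace then stacks malloc -> bdev_error -> passthru base bdevs (BaseBdev1, BaseBdev2),
# builds the concat raid_bdev1 on top, and releases the idle workload:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
# while it runs, read failures are injected into the error bdev sitting under BaseBdev1
$rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
# the per-second failure count checked at the end is pulled from the captured output
grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}'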
00:17:47.109 [2024-06-10 11:41:19.103915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124327 ] 00:17:47.368 [2024-06-10 11:41:19.290778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.627 [2024-06-10 11:41:19.507852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.886 [2024-06-10 11:41:19.748969] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.144 11:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:48.144 11:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:17:48.144 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:48.144 11:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:48.404 BaseBdev1_malloc 00:17:48.404 11:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:48.664 true 00:17:48.664 11:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:48.664 [2024-06-10 11:41:20.715405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:48.664 [2024-06-10 11:41:20.715687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.664 [2024-06-10 11:41:20.715830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:48.664 [2024-06-10 11:41:20.715916] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.664 [2024-06-10 11:41:20.718353] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.664 [2024-06-10 11:41:20.718501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.664 BaseBdev1 00:17:48.924 11:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:48.924 11:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:49.183 BaseBdev2_malloc 00:17:49.183 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:49.442 true 00:17:49.442 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:49.701 [2024-06-10 11:41:21.549642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:49.701 [2024-06-10 11:41:21.550005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.701 [2024-06-10 11:41:21.550182] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:49.701 [2024-06-10 11:41:21.550307] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.701 [2024-06-10 11:41:21.552916] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.701 [2024-06-10 11:41:21.553095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:49.701 BaseBdev2 00:17:49.701 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:49.960 [2024-06-10 11:41:21.809861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.960 [2024-06-10 11:41:21.812172] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.960 [2024-06-10 11:41:21.812531] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:49.960 [2024-06-10 11:41:21.812657] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:49.960 [2024-06-10 11:41:21.812847] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:49.960 [2024-06-10 11:41:21.813263] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:49.960 [2024-06-10 11:41:21.813305] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:49.960 [2024-06-10 11:41:21.813638] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.960 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:49.960 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:49.960 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:49.960 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:49.960 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.960 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:49.960 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.960 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.961 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.961 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.961 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.961 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.219 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.220 "name": "raid_bdev1", 00:17:50.220 "uuid": "ded5ebb9-3a20-4ce0-9984-d343a1ac0028", 00:17:50.220 "strip_size_kb": 64, 00:17:50.220 "state": "online", 00:17:50.220 "raid_level": "concat", 00:17:50.220 "superblock": true, 00:17:50.220 "num_base_bdevs": 2, 00:17:50.220 "num_base_bdevs_discovered": 2, 00:17:50.220 "num_base_bdevs_operational": 2, 00:17:50.220 "base_bdevs_list": [ 00:17:50.220 { 00:17:50.220 "name": "BaseBdev1", 00:17:50.220 "uuid": 
"74723566-7d14-53b8-8961-c492bc595b2c", 00:17:50.220 "is_configured": true, 00:17:50.220 "data_offset": 2048, 00:17:50.220 "data_size": 63488 00:17:50.220 }, 00:17:50.220 { 00:17:50.220 "name": "BaseBdev2", 00:17:50.220 "uuid": "81445d57-3fb2-5e29-b859-661158039f85", 00:17:50.220 "is_configured": true, 00:17:50.220 "data_offset": 2048, 00:17:50.220 "data_size": 63488 00:17:50.220 } 00:17:50.220 ] 00:17:50.220 }' 00:17:50.220 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.220 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.791 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:50.791 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:50.791 [2024-06-10 11:41:22.727791] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:51.727 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.986 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.987 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.987 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.987 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.551 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.551 "name": "raid_bdev1", 00:17:52.551 "uuid": "ded5ebb9-3a20-4ce0-9984-d343a1ac0028", 00:17:52.551 "strip_size_kb": 64, 00:17:52.551 "state": "online", 00:17:52.551 "raid_level": "concat", 00:17:52.551 "superblock": true, 00:17:52.551 "num_base_bdevs": 2, 00:17:52.551 "num_base_bdevs_discovered": 2, 00:17:52.551 "num_base_bdevs_operational": 2, 00:17:52.551 "base_bdevs_list": [ 00:17:52.551 { 00:17:52.551 "name": "BaseBdev1", 00:17:52.551 "uuid": 
"74723566-7d14-53b8-8961-c492bc595b2c", 00:17:52.551 "is_configured": true, 00:17:52.551 "data_offset": 2048, 00:17:52.551 "data_size": 63488 00:17:52.551 }, 00:17:52.551 { 00:17:52.551 "name": "BaseBdev2", 00:17:52.551 "uuid": "81445d57-3fb2-5e29-b859-661158039f85", 00:17:52.551 "is_configured": true, 00:17:52.551 "data_offset": 2048, 00:17:52.551 "data_size": 63488 00:17:52.551 } 00:17:52.551 ] 00:17:52.551 }' 00:17:52.551 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.551 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.124 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:53.381 [2024-06-10 11:41:25.290361] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.381 [2024-06-10 11:41:25.290622] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.381 [2024-06-10 11:41:25.293822] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.381 [2024-06-10 11:41:25.293999] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.381 [2024-06-10 11:41:25.294115] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.382 [2024-06-10 11:41:25.294199] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:53.382 0 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 124327 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 124327 ']' 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 124327 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 124327 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 124327' 00:17:53.382 killing process with pid 124327 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 124327 00:17:53.382 [2024-06-10 11:41:25.338790] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.382 11:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 124327 00:17:53.639 [2024-06-10 11:41:25.494388] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.eTLU2CIFNz 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.39 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.39 != \0\.\0\0 ]] 00:17:55.539 00:17:55.539 real 0m8.137s 00:17:55.539 user 0m12.002s 00:17:55.539 sys 0m0.928s 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:55.539 11:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.539 ************************************ 00:17:55.539 END TEST raid_read_error_test 00:17:55.539 ************************************ 00:17:55.539 11:41:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:17:55.539 11:41:27 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:55.539 11:41:27 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:55.539 11:41:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.539 ************************************ 00:17:55.539 START TEST raid_write_error_test 00:17:55.539 ************************************ 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 2 write 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:55.539 11:41:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.LmJdwgVzHs 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=124531 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 124531 /var/tmp/spdk-raid.sock 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 124531 ']' 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:55.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:55.539 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.539 [2024-06-10 11:41:27.282381] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:17:55.539 [2024-06-10 11:41:27.283030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124531 ] 00:17:55.539 [2024-06-10 11:41:27.469978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.797 [2024-06-10 11:41:27.777828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.056 [2024-06-10 11:41:28.066538] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.314 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:56.314 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:17:56.314 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:56.314 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:56.573 BaseBdev1_malloc 00:17:56.832 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:57.092 true 00:17:57.092 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:57.351 [2024-06-10 11:41:29.260416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:57.351 [2024-06-10 11:41:29.260730] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:57.351 [2024-06-10 11:41:29.260912] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:57.351 [2024-06-10 11:41:29.261055] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.351 [2024-06-10 11:41:29.264185] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.351 [2024-06-10 11:41:29.264382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:57.351 BaseBdev1 00:17:57.351 11:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:57.351 11:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:57.609 BaseBdev2_malloc 00:17:57.609 11:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:57.867 true 00:17:57.867 11:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:58.126 [2024-06-10 11:41:30.071748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:58.126 [2024-06-10 11:41:30.072111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.126 [2024-06-10 11:41:30.072245] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:58.126 [2024-06-10 11:41:30.072557] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.126 [2024-06-10 11:41:30.075618] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.126 [2024-06-10 11:41:30.075817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:58.126 BaseBdev2 00:17:58.126 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:58.466 [2024-06-10 11:41:30.368222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.466 [2024-06-10 11:41:30.370650] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.466 [2024-06-10 11:41:30.371041] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:58.466 [2024-06-10 11:41:30.371161] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:58.466 [2024-06-10 11:41:30.371359] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:58.466 [2024-06-10 11:41:30.371780] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:58.466 [2024-06-10 11:41:30.371893] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:58.467 [2024-06-10 11:41:30.372183] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=raid_bdev1 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.467 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.726 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.726 "name": "raid_bdev1", 00:17:58.726 "uuid": "ed1b2aeb-c568-454f-876f-a9f578c82af1", 00:17:58.726 "strip_size_kb": 64, 00:17:58.726 "state": "online", 00:17:58.726 "raid_level": "concat", 00:17:58.726 "superblock": true, 00:17:58.726 "num_base_bdevs": 2, 00:17:58.726 "num_base_bdevs_discovered": 2, 00:17:58.726 "num_base_bdevs_operational": 2, 00:17:58.726 "base_bdevs_list": [ 00:17:58.726 { 00:17:58.726 "name": "BaseBdev1", 00:17:58.726 "uuid": "b48ca5c5-4469-531f-8066-96305474938b", 00:17:58.726 "is_configured": true, 00:17:58.726 "data_offset": 2048, 00:17:58.726 "data_size": 63488 00:17:58.726 }, 00:17:58.726 { 00:17:58.726 "name": "BaseBdev2", 00:17:58.726 "uuid": "658f39b9-9bc6-5e13-9d38-c2e1db07603c", 00:17:58.726 "is_configured": true, 00:17:58.726 "data_offset": 2048, 00:17:58.726 "data_size": 63488 00:17:58.726 } 00:17:58.726 ] 00:17:58.726 }' 00:17:58.726 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.726 11:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.294 11:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:59.294 11:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:59.294 [2024-06-10 11:41:31.269959] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:00.227 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:00.485 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:00.485 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.486 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.744 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.744 "name": "raid_bdev1", 00:18:00.744 "uuid": "ed1b2aeb-c568-454f-876f-a9f578c82af1", 00:18:00.744 "strip_size_kb": 64, 00:18:00.744 "state": "online", 00:18:00.744 "raid_level": "concat", 00:18:00.744 "superblock": true, 00:18:00.744 "num_base_bdevs": 2, 00:18:00.744 "num_base_bdevs_discovered": 2, 00:18:00.744 "num_base_bdevs_operational": 2, 00:18:00.744 "base_bdevs_list": [ 00:18:00.744 { 00:18:00.744 "name": "BaseBdev1", 00:18:00.744 "uuid": "b48ca5c5-4469-531f-8066-96305474938b", 00:18:00.744 "is_configured": true, 00:18:00.744 "data_offset": 2048, 00:18:00.744 "data_size": 63488 00:18:00.744 }, 00:18:00.744 { 00:18:00.744 "name": "BaseBdev2", 00:18:00.744 "uuid": "658f39b9-9bc6-5e13-9d38-c2e1db07603c", 00:18:00.744 "is_configured": true, 00:18:00.744 "data_offset": 2048, 00:18:00.744 "data_size": 63488 00:18:00.744 } 00:18:00.744 ] 00:18:00.744 }' 00:18:00.744 11:41:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.744 11:41:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:01.680 [2024-06-10 11:41:33.643513] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.680 [2024-06-10 11:41:33.643767] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.680 [2024-06-10 11:41:33.647135] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.680 [2024-06-10 11:41:33.647308] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.680 [2024-06-10 11:41:33.647382] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.680 [2024-06-10 11:41:33.647484] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:01.680 0 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 124531 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- 
# '[' -z 124531 ']' 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 124531 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 124531 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 124531' 00:18:01.680 killing process with pid 124531 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 124531 00:18:01.680 [2024-06-10 11:41:33.699203] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.680 11:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 124531 00:18:01.938 [2024-06-10 11:41:33.853560] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.LmJdwgVzHs 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:03.849 ************************************ 00:18:03.849 END TEST raid_write_error_test 00:18:03.849 ************************************ 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:18:03.849 00:18:03.849 real 0m8.307s 00:18:03.849 user 0m12.322s 00:18:03.849 sys 0m0.930s 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:03.849 11:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.849 11:41:35 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:03.849 11:41:35 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:18:03.849 11:41:35 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:03.849 11:41:35 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:03.849 11:41:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.849 ************************************ 00:18:03.849 START TEST raid_state_function_test 00:18:03.849 ************************************ 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 false 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:03.849 
11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=124731 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124731' 00:18:03.849 Process raid pid: 124731 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 124731 /var/tmp/spdk-raid.sock 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 124731 ']' 00:18:03.849 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:03.850 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:03.850 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:03.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
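For orientation while the next test process starts up: the raid_read_error_test and raid_write_error_test runs above drive one short rpc.py sequence against the app listening on /var/tmp/spdk-raid.sock. The recap below is hand-reconstructed from the commands visible in this log, not an excerpt of the captured output; the RPC shell variable is introduced here only as shorthand and is not part of the test scripts.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Build an error-injectable stack for each base bdev: malloc -> error -> passthru
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $RPC bdev_error_create BaseBdev1_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # Assemble the concat array (64k strip size, superblock) from both passthru bdevs
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    # Inject read (or write) failures on the first base bdev, run bdevperf I/O,
    # confirm the array stays online, then tear it down
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
    $RPC bdev_raid_get_bdevs all
    $RPC bdev_raid_delete raid_bdev1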
00:18:03.850 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:03.850 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.850 [2024-06-10 11:41:35.655979] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:18:03.850 [2024-06-10 11:41:35.656486] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.850 [2024-06-10 11:41:35.846978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.415 [2024-06-10 11:41:36.184365] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.673 [2024-06-10 11:41:36.504599] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.673 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:04.673 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:18:04.673 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:04.931 [2024-06-10 11:41:36.952181] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.931 [2024-06-10 11:41:36.952499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.931 [2024-06-10 11:41:36.952589] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.931 [2024-06-10 11:41:36.952662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.931 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.189 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.189 "name": "Existed_Raid", 00:18:05.189 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:05.189 "strip_size_kb": 0, 00:18:05.189 "state": "configuring", 00:18:05.189 "raid_level": "raid1", 00:18:05.189 "superblock": false, 00:18:05.189 "num_base_bdevs": 2, 00:18:05.189 "num_base_bdevs_discovered": 0, 00:18:05.189 "num_base_bdevs_operational": 2, 00:18:05.189 "base_bdevs_list": [ 00:18:05.189 { 00:18:05.189 "name": "BaseBdev1", 00:18:05.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.189 "is_configured": false, 00:18:05.189 "data_offset": 0, 00:18:05.189 "data_size": 0 00:18:05.189 }, 00:18:05.189 { 00:18:05.189 "name": "BaseBdev2", 00:18:05.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.189 "is_configured": false, 00:18:05.189 "data_offset": 0, 00:18:05.189 "data_size": 0 00:18:05.189 } 00:18:05.189 ] 00:18:05.189 }' 00:18:05.189 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.189 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.755 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:06.015 [2024-06-10 11:41:38.064283] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.015 [2024-06-10 11:41:38.064497] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:06.282 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:06.540 [2024-06-10 11:41:38.348356] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.540 [2024-06-10 11:41:38.348669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.540 [2024-06-10 11:41:38.348768] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.540 [2024-06-10 11:41:38.348834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.540 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:06.797 [2024-06-10 11:41:38.666893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.797 BaseBdev1 00:18:06.797 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:06.797 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:18:06.797 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:06.797 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:18:06.797 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:06.797 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:06.797 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.054 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:07.619 [ 00:18:07.619 { 00:18:07.619 "name": "BaseBdev1", 00:18:07.619 "aliases": [ 00:18:07.619 "11a751df-164e-4489-8a1b-86fcc7748c85" 00:18:07.619 ], 00:18:07.619 "product_name": "Malloc disk", 00:18:07.619 "block_size": 512, 00:18:07.619 "num_blocks": 65536, 00:18:07.619 "uuid": "11a751df-164e-4489-8a1b-86fcc7748c85", 00:18:07.619 "assigned_rate_limits": { 00:18:07.619 "rw_ios_per_sec": 0, 00:18:07.619 "rw_mbytes_per_sec": 0, 00:18:07.619 "r_mbytes_per_sec": 0, 00:18:07.619 "w_mbytes_per_sec": 0 00:18:07.619 }, 00:18:07.619 "claimed": true, 00:18:07.619 "claim_type": "exclusive_write", 00:18:07.619 "zoned": false, 00:18:07.619 "supported_io_types": { 00:18:07.619 "read": true, 00:18:07.619 "write": true, 00:18:07.619 "unmap": true, 00:18:07.619 "write_zeroes": true, 00:18:07.619 "flush": true, 00:18:07.619 "reset": true, 00:18:07.619 "compare": false, 00:18:07.619 "compare_and_write": false, 00:18:07.619 "abort": true, 00:18:07.619 "nvme_admin": false, 00:18:07.619 "nvme_io": false 00:18:07.619 }, 00:18:07.619 "memory_domains": [ 00:18:07.619 { 00:18:07.619 "dma_device_id": "system", 00:18:07.619 "dma_device_type": 1 00:18:07.619 }, 00:18:07.619 { 00:18:07.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.619 "dma_device_type": 2 00:18:07.619 } 00:18:07.619 ], 00:18:07.619 "driver_specific": {} 00:18:07.619 } 00:18:07.619 ] 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.619 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.877 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.877 "name": "Existed_Raid", 00:18:07.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.877 "strip_size_kb": 0, 00:18:07.877 "state": "configuring", 00:18:07.877 "raid_level": "raid1", 00:18:07.877 "superblock": false, 00:18:07.877 "num_base_bdevs": 2, 00:18:07.877 "num_base_bdevs_discovered": 1, 00:18:07.877 "num_base_bdevs_operational": 2, 00:18:07.877 "base_bdevs_list": [ 
00:18:07.877 { 00:18:07.877 "name": "BaseBdev1", 00:18:07.877 "uuid": "11a751df-164e-4489-8a1b-86fcc7748c85", 00:18:07.877 "is_configured": true, 00:18:07.877 "data_offset": 0, 00:18:07.877 "data_size": 65536 00:18:07.877 }, 00:18:07.877 { 00:18:07.877 "name": "BaseBdev2", 00:18:07.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.877 "is_configured": false, 00:18:07.877 "data_offset": 0, 00:18:07.877 "data_size": 0 00:18:07.877 } 00:18:07.877 ] 00:18:07.877 }' 00:18:07.877 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.877 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.445 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:08.703 [2024-06-10 11:41:40.651344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:08.703 [2024-06-10 11:41:40.651628] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:08.703 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:08.962 [2024-06-10 11:41:40.867386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.962 [2024-06-10 11:41:40.869868] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.962 [2024-06-10 11:41:40.870062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.962 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.220 11:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.220 
"name": "Existed_Raid", 00:18:09.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.220 "strip_size_kb": 0, 00:18:09.220 "state": "configuring", 00:18:09.220 "raid_level": "raid1", 00:18:09.220 "superblock": false, 00:18:09.220 "num_base_bdevs": 2, 00:18:09.220 "num_base_bdevs_discovered": 1, 00:18:09.220 "num_base_bdevs_operational": 2, 00:18:09.220 "base_bdevs_list": [ 00:18:09.220 { 00:18:09.220 "name": "BaseBdev1", 00:18:09.220 "uuid": "11a751df-164e-4489-8a1b-86fcc7748c85", 00:18:09.220 "is_configured": true, 00:18:09.220 "data_offset": 0, 00:18:09.220 "data_size": 65536 00:18:09.220 }, 00:18:09.220 { 00:18:09.220 "name": "BaseBdev2", 00:18:09.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.220 "is_configured": false, 00:18:09.220 "data_offset": 0, 00:18:09.220 "data_size": 0 00:18:09.221 } 00:18:09.221 ] 00:18:09.221 }' 00:18:09.221 11:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.221 11:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.787 11:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.045 [2024-06-10 11:41:41.934863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.045 [2024-06-10 11:41:41.935135] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:10.045 [2024-06-10 11:41:41.935181] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:10.045 [2024-06-10 11:41:41.935420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:10.045 [2024-06-10 11:41:41.935879] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:10.045 [2024-06-10 11:41:41.935996] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:10.045 [2024-06-10 11:41:41.936359] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.045 BaseBdev2 00:18:10.045 11:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:10.045 11:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:18:10.045 11:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:10.045 11:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:18:10.045 11:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:10.045 11:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:10.045 11:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.301 11:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.558 [ 00:18:10.558 { 00:18:10.558 "name": "BaseBdev2", 00:18:10.558 "aliases": [ 00:18:10.558 "ba399c34-00f9-49fa-90db-0596fbbd3827" 00:18:10.558 ], 00:18:10.558 "product_name": "Malloc disk", 00:18:10.558 "block_size": 512, 00:18:10.558 "num_blocks": 65536, 00:18:10.558 "uuid": 
"ba399c34-00f9-49fa-90db-0596fbbd3827", 00:18:10.558 "assigned_rate_limits": { 00:18:10.558 "rw_ios_per_sec": 0, 00:18:10.558 "rw_mbytes_per_sec": 0, 00:18:10.558 "r_mbytes_per_sec": 0, 00:18:10.558 "w_mbytes_per_sec": 0 00:18:10.559 }, 00:18:10.559 "claimed": true, 00:18:10.559 "claim_type": "exclusive_write", 00:18:10.559 "zoned": false, 00:18:10.559 "supported_io_types": { 00:18:10.559 "read": true, 00:18:10.559 "write": true, 00:18:10.559 "unmap": true, 00:18:10.559 "write_zeroes": true, 00:18:10.559 "flush": true, 00:18:10.559 "reset": true, 00:18:10.559 "compare": false, 00:18:10.559 "compare_and_write": false, 00:18:10.559 "abort": true, 00:18:10.559 "nvme_admin": false, 00:18:10.559 "nvme_io": false 00:18:10.559 }, 00:18:10.559 "memory_domains": [ 00:18:10.559 { 00:18:10.559 "dma_device_id": "system", 00:18:10.559 "dma_device_type": 1 00:18:10.559 }, 00:18:10.559 { 00:18:10.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.559 "dma_device_type": 2 00:18:10.559 } 00:18:10.559 ], 00:18:10.559 "driver_specific": {} 00:18:10.559 } 00:18:10.559 ] 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.559 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.816 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.816 "name": "Existed_Raid", 00:18:10.816 "uuid": "3643bcbc-3310-4992-a636-b4c34c2a4854", 00:18:10.816 "strip_size_kb": 0, 00:18:10.816 "state": "online", 00:18:10.816 "raid_level": "raid1", 00:18:10.816 "superblock": false, 00:18:10.816 "num_base_bdevs": 2, 00:18:10.816 "num_base_bdevs_discovered": 2, 00:18:10.816 "num_base_bdevs_operational": 2, 00:18:10.816 "base_bdevs_list": [ 00:18:10.816 { 00:18:10.816 "name": "BaseBdev1", 00:18:10.816 "uuid": "11a751df-164e-4489-8a1b-86fcc7748c85", 00:18:10.817 "is_configured": true, 00:18:10.817 "data_offset": 0, 00:18:10.817 "data_size": 65536 
00:18:10.817 }, 00:18:10.817 { 00:18:10.817 "name": "BaseBdev2", 00:18:10.817 "uuid": "ba399c34-00f9-49fa-90db-0596fbbd3827", 00:18:10.817 "is_configured": true, 00:18:10.817 "data_offset": 0, 00:18:10.817 "data_size": 65536 00:18:10.817 } 00:18:10.817 ] 00:18:10.817 }' 00:18:10.817 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.817 11:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:11.382 [2024-06-10 11:41:43.403407] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.382 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:11.382 "name": "Existed_Raid", 00:18:11.382 "aliases": [ 00:18:11.382 "3643bcbc-3310-4992-a636-b4c34c2a4854" 00:18:11.382 ], 00:18:11.382 "product_name": "Raid Volume", 00:18:11.382 "block_size": 512, 00:18:11.382 "num_blocks": 65536, 00:18:11.382 "uuid": "3643bcbc-3310-4992-a636-b4c34c2a4854", 00:18:11.382 "assigned_rate_limits": { 00:18:11.382 "rw_ios_per_sec": 0, 00:18:11.382 "rw_mbytes_per_sec": 0, 00:18:11.382 "r_mbytes_per_sec": 0, 00:18:11.382 "w_mbytes_per_sec": 0 00:18:11.382 }, 00:18:11.382 "claimed": false, 00:18:11.382 "zoned": false, 00:18:11.382 "supported_io_types": { 00:18:11.382 "read": true, 00:18:11.382 "write": true, 00:18:11.382 "unmap": false, 00:18:11.382 "write_zeroes": true, 00:18:11.382 "flush": false, 00:18:11.382 "reset": true, 00:18:11.382 "compare": false, 00:18:11.382 "compare_and_write": false, 00:18:11.382 "abort": false, 00:18:11.382 "nvme_admin": false, 00:18:11.382 "nvme_io": false 00:18:11.382 }, 00:18:11.382 "memory_domains": [ 00:18:11.382 { 00:18:11.382 "dma_device_id": "system", 00:18:11.383 "dma_device_type": 1 00:18:11.383 }, 00:18:11.383 { 00:18:11.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.383 "dma_device_type": 2 00:18:11.383 }, 00:18:11.383 { 00:18:11.383 "dma_device_id": "system", 00:18:11.383 "dma_device_type": 1 00:18:11.383 }, 00:18:11.383 { 00:18:11.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.383 "dma_device_type": 2 00:18:11.383 } 00:18:11.383 ], 00:18:11.383 "driver_specific": { 00:18:11.383 "raid": { 00:18:11.383 "uuid": "3643bcbc-3310-4992-a636-b4c34c2a4854", 00:18:11.383 "strip_size_kb": 0, 00:18:11.383 "state": "online", 00:18:11.383 "raid_level": "raid1", 00:18:11.383 "superblock": false, 00:18:11.383 "num_base_bdevs": 2, 00:18:11.383 "num_base_bdevs_discovered": 2, 00:18:11.383 "num_base_bdevs_operational": 2, 00:18:11.383 "base_bdevs_list": [ 00:18:11.383 { 00:18:11.383 
"name": "BaseBdev1", 00:18:11.383 "uuid": "11a751df-164e-4489-8a1b-86fcc7748c85", 00:18:11.383 "is_configured": true, 00:18:11.383 "data_offset": 0, 00:18:11.383 "data_size": 65536 00:18:11.383 }, 00:18:11.383 { 00:18:11.383 "name": "BaseBdev2", 00:18:11.383 "uuid": "ba399c34-00f9-49fa-90db-0596fbbd3827", 00:18:11.383 "is_configured": true, 00:18:11.383 "data_offset": 0, 00:18:11.383 "data_size": 65536 00:18:11.383 } 00:18:11.383 ] 00:18:11.383 } 00:18:11.383 } 00:18:11.383 }' 00:18:11.383 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.641 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:11.641 BaseBdev2' 00:18:11.641 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:11.641 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:11.641 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:11.899 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:11.900 "name": "BaseBdev1", 00:18:11.900 "aliases": [ 00:18:11.900 "11a751df-164e-4489-8a1b-86fcc7748c85" 00:18:11.900 ], 00:18:11.900 "product_name": "Malloc disk", 00:18:11.900 "block_size": 512, 00:18:11.900 "num_blocks": 65536, 00:18:11.900 "uuid": "11a751df-164e-4489-8a1b-86fcc7748c85", 00:18:11.900 "assigned_rate_limits": { 00:18:11.900 "rw_ios_per_sec": 0, 00:18:11.900 "rw_mbytes_per_sec": 0, 00:18:11.900 "r_mbytes_per_sec": 0, 00:18:11.900 "w_mbytes_per_sec": 0 00:18:11.900 }, 00:18:11.900 "claimed": true, 00:18:11.900 "claim_type": "exclusive_write", 00:18:11.900 "zoned": false, 00:18:11.900 "supported_io_types": { 00:18:11.900 "read": true, 00:18:11.900 "write": true, 00:18:11.900 "unmap": true, 00:18:11.900 "write_zeroes": true, 00:18:11.900 "flush": true, 00:18:11.900 "reset": true, 00:18:11.900 "compare": false, 00:18:11.900 "compare_and_write": false, 00:18:11.900 "abort": true, 00:18:11.900 "nvme_admin": false, 00:18:11.900 "nvme_io": false 00:18:11.900 }, 00:18:11.900 "memory_domains": [ 00:18:11.900 { 00:18:11.900 "dma_device_id": "system", 00:18:11.900 "dma_device_type": 1 00:18:11.900 }, 00:18:11.900 { 00:18:11.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.900 "dma_device_type": 2 00:18:11.900 } 00:18:11.900 ], 00:18:11.900 "driver_specific": {} 00:18:11.900 }' 00:18:11.900 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.900 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.900 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:11.900 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.900 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.900 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:11.900 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.900 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.159 11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:12.159 
11:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.159 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.159 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:12.159 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:12.159 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:12.159 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:12.418 "name": "BaseBdev2", 00:18:12.418 "aliases": [ 00:18:12.418 "ba399c34-00f9-49fa-90db-0596fbbd3827" 00:18:12.418 ], 00:18:12.418 "product_name": "Malloc disk", 00:18:12.418 "block_size": 512, 00:18:12.418 "num_blocks": 65536, 00:18:12.418 "uuid": "ba399c34-00f9-49fa-90db-0596fbbd3827", 00:18:12.418 "assigned_rate_limits": { 00:18:12.418 "rw_ios_per_sec": 0, 00:18:12.418 "rw_mbytes_per_sec": 0, 00:18:12.418 "r_mbytes_per_sec": 0, 00:18:12.418 "w_mbytes_per_sec": 0 00:18:12.418 }, 00:18:12.418 "claimed": true, 00:18:12.418 "claim_type": "exclusive_write", 00:18:12.418 "zoned": false, 00:18:12.418 "supported_io_types": { 00:18:12.418 "read": true, 00:18:12.418 "write": true, 00:18:12.418 "unmap": true, 00:18:12.418 "write_zeroes": true, 00:18:12.418 "flush": true, 00:18:12.418 "reset": true, 00:18:12.418 "compare": false, 00:18:12.418 "compare_and_write": false, 00:18:12.418 "abort": true, 00:18:12.418 "nvme_admin": false, 00:18:12.418 "nvme_io": false 00:18:12.418 }, 00:18:12.418 "memory_domains": [ 00:18:12.418 { 00:18:12.418 "dma_device_id": "system", 00:18:12.418 "dma_device_type": 1 00:18:12.418 }, 00:18:12.418 { 00:18:12.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.418 "dma_device_type": 2 00:18:12.418 } 00:18:12.418 ], 00:18:12.418 "driver_specific": {} 00:18:12.418 }' 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.418 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.676 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:12.676 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.676 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.676 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:12.676 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:12.935 
[2024-06-10 11:41:44.799540] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.935 11:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.193 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.193 "name": "Existed_Raid", 00:18:13.193 "uuid": "3643bcbc-3310-4992-a636-b4c34c2a4854", 00:18:13.193 "strip_size_kb": 0, 00:18:13.193 "state": "online", 00:18:13.193 "raid_level": "raid1", 00:18:13.193 "superblock": false, 00:18:13.193 "num_base_bdevs": 2, 00:18:13.193 "num_base_bdevs_discovered": 1, 00:18:13.193 "num_base_bdevs_operational": 1, 00:18:13.193 "base_bdevs_list": [ 00:18:13.193 { 00:18:13.193 "name": null, 00:18:13.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.193 "is_configured": false, 00:18:13.193 "data_offset": 0, 00:18:13.193 "data_size": 65536 00:18:13.193 }, 00:18:13.193 { 00:18:13.193 "name": "BaseBdev2", 00:18:13.193 "uuid": "ba399c34-00f9-49fa-90db-0596fbbd3827", 00:18:13.193 "is_configured": true, 00:18:13.193 "data_offset": 0, 00:18:13.193 "data_size": 65536 00:18:13.193 } 00:18:13.193 ] 00:18:13.193 }' 00:18:13.193 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.193 11:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.760 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:13.760 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:13.760 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.760 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:14.019 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:14.019 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.019 11:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:14.277 [2024-06-10 11:41:46.221571] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:14.277 [2024-06-10 11:41:46.221910] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.277 [2024-06-10 11:41:46.323735] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.277 [2024-06-10 11:41:46.323938] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.277 [2024-06-10 11:41:46.324050] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 124731 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 124731 ']' 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 124731 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 124731 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 124731' 00:18:14.536 killing process with pid 124731 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 124731 00:18:14.536 [2024-06-10 11:41:46.575027] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.536 11:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 124731 00:18:14.536 [2024-06-10 
11:41:46.575304] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.439 ************************************ 00:18:16.439 END TEST raid_state_function_test 00:18:16.439 ************************************ 00:18:16.439 11:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:16.439 00:18:16.439 real 0m12.424s 00:18:16.439 user 0m21.016s 00:18:16.439 sys 0m1.912s 00:18:16.439 11:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:16.439 11:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.439 11:41:48 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:18:16.439 11:41:48 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:16.439 11:41:48 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:16.439 11:41:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.439 ************************************ 00:18:16.439 START TEST raid_state_function_test_sb 00:18:16.439 ************************************ 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:16.439 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:16.440 11:41:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=125119 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 125119' 00:18:16.440 Process raid pid: 125119 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 125119 /var/tmp/spdk-raid.sock 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 125119 ']' 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:16.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:16.440 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.440 [2024-06-10 11:41:48.129255] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
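(The launch sequence above — raid_pid=125119 followed by waitforlisten against /var/tmp/spdk-raid.sock — is the usual "start a bare bdev_svc app, then drive it over RPC" pattern these tests rely on. Below is a minimal sketch of that pattern; the binary and socket paths are taken from this run, and the polling loop is a simplified stand-in for the autotest waitforlisten helper, not its actual implementation.)

    #!/usr/bin/env bash
    # Start a bare SPDK bdev_svc app and wait until its RPC socket answers.
    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    "$app" -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!

    # Poll for RPC readiness (simplified stand-in for waitforlisten).
    for _ in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

    # ... issue bdev_malloc_create / bdev_raid_create / ... against "$rpc" -s "$sock" ...

    kill "$raid_pid" && wait "$raid_pid"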
00:18:16.440 [2024-06-10 11:41:48.129703] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.440 [2024-06-10 11:41:48.308931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.698 [2024-06-10 11:41:48.510123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.698 [2024-06-10 11:41:48.715150] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:17.265 [2024-06-10 11:41:49.296587] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.265 [2024-06-10 11:41:49.296862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:17.265 [2024-06-10 11:41:49.296969] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.265 [2024-06-10 11:41:49.297037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.265 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.851 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.851 "name": "Existed_Raid", 00:18:17.851 "uuid": "f5ce82de-6f91-47ff-b6f2-e77ab5c6b89f", 00:18:17.851 "strip_size_kb": 0, 00:18:17.851 "state": "configuring", 00:18:17.851 "raid_level": "raid1", 00:18:17.851 "superblock": true, 00:18:17.851 "num_base_bdevs": 2, 00:18:17.851 "num_base_bdevs_discovered": 0, 00:18:17.851 "num_base_bdevs_operational": 2, 
00:18:17.851 "base_bdevs_list": [ 00:18:17.851 { 00:18:17.851 "name": "BaseBdev1", 00:18:17.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.851 "is_configured": false, 00:18:17.851 "data_offset": 0, 00:18:17.851 "data_size": 0 00:18:17.851 }, 00:18:17.851 { 00:18:17.851 "name": "BaseBdev2", 00:18:17.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.851 "is_configured": false, 00:18:17.851 "data_offset": 0, 00:18:17.851 "data_size": 0 00:18:17.851 } 00:18:17.851 ] 00:18:17.851 }' 00:18:17.851 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.851 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.418 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:18.676 [2024-06-10 11:41:50.504670] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.676 [2024-06-10 11:41:50.504905] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:18.676 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:18.934 [2024-06-10 11:41:50.788758] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.934 [2024-06-10 11:41:50.789053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.934 [2024-06-10 11:41:50.789171] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.934 [2024-06-10 11:41:50.789289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.934 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.192 [2024-06-10 11:41:51.070484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.192 BaseBdev1 00:18:19.192 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:19.192 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:18:19.193 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:19.193 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:19.193 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:19.193 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:19.193 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.452 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.712 [ 00:18:19.712 { 00:18:19.712 "name": "BaseBdev1", 00:18:19.712 "aliases": [ 00:18:19.712 "47615a2c-5b03-4679-a9e5-bcfcbe7bec9e" 00:18:19.712 ], 00:18:19.712 "product_name": 
"Malloc disk", 00:18:19.712 "block_size": 512, 00:18:19.712 "num_blocks": 65536, 00:18:19.712 "uuid": "47615a2c-5b03-4679-a9e5-bcfcbe7bec9e", 00:18:19.712 "assigned_rate_limits": { 00:18:19.712 "rw_ios_per_sec": 0, 00:18:19.712 "rw_mbytes_per_sec": 0, 00:18:19.712 "r_mbytes_per_sec": 0, 00:18:19.712 "w_mbytes_per_sec": 0 00:18:19.712 }, 00:18:19.712 "claimed": true, 00:18:19.712 "claim_type": "exclusive_write", 00:18:19.712 "zoned": false, 00:18:19.712 "supported_io_types": { 00:18:19.712 "read": true, 00:18:19.712 "write": true, 00:18:19.712 "unmap": true, 00:18:19.712 "write_zeroes": true, 00:18:19.712 "flush": true, 00:18:19.712 "reset": true, 00:18:19.712 "compare": false, 00:18:19.712 "compare_and_write": false, 00:18:19.712 "abort": true, 00:18:19.712 "nvme_admin": false, 00:18:19.712 "nvme_io": false 00:18:19.712 }, 00:18:19.712 "memory_domains": [ 00:18:19.712 { 00:18:19.712 "dma_device_id": "system", 00:18:19.712 "dma_device_type": 1 00:18:19.712 }, 00:18:19.712 { 00:18:19.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.712 "dma_device_type": 2 00:18:19.712 } 00:18:19.712 ], 00:18:19.712 "driver_specific": {} 00:18:19.712 } 00:18:19.712 ] 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.712 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.971 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.971 "name": "Existed_Raid", 00:18:19.971 "uuid": "ea9285a4-8029-476c-ad6a-1cf51ad2beb6", 00:18:19.971 "strip_size_kb": 0, 00:18:19.971 "state": "configuring", 00:18:19.971 "raid_level": "raid1", 00:18:19.971 "superblock": true, 00:18:19.971 "num_base_bdevs": 2, 00:18:19.971 "num_base_bdevs_discovered": 1, 00:18:19.971 "num_base_bdevs_operational": 2, 00:18:19.971 "base_bdevs_list": [ 00:18:19.971 { 00:18:19.971 "name": "BaseBdev1", 00:18:19.971 "uuid": "47615a2c-5b03-4679-a9e5-bcfcbe7bec9e", 00:18:19.971 "is_configured": true, 00:18:19.971 "data_offset": 2048, 00:18:19.971 "data_size": 63488 00:18:19.971 }, 00:18:19.971 { 00:18:19.971 "name": 
"BaseBdev2", 00:18:19.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.971 "is_configured": false, 00:18:19.971 "data_offset": 0, 00:18:19.971 "data_size": 0 00:18:19.971 } 00:18:19.971 ] 00:18:19.971 }' 00:18:19.971 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.971 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.603 11:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:20.861 [2024-06-10 11:41:52.734914] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.861 [2024-06-10 11:41:52.735196] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:20.861 11:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:21.120 [2024-06-10 11:41:53.023075] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.120 [2024-06-10 11:41:53.025529] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.120 [2024-06-10 11:41:53.025767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.120 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.378 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.378 "name": "Existed_Raid", 00:18:21.378 "uuid": "41e1fac6-c385-4639-9715-170204670cfc", 00:18:21.378 "strip_size_kb": 0, 00:18:21.378 "state": "configuring", 00:18:21.378 "raid_level": "raid1", 
00:18:21.378 "superblock": true, 00:18:21.378 "num_base_bdevs": 2, 00:18:21.378 "num_base_bdevs_discovered": 1, 00:18:21.378 "num_base_bdevs_operational": 2, 00:18:21.378 "base_bdevs_list": [ 00:18:21.378 { 00:18:21.378 "name": "BaseBdev1", 00:18:21.378 "uuid": "47615a2c-5b03-4679-a9e5-bcfcbe7bec9e", 00:18:21.378 "is_configured": true, 00:18:21.378 "data_offset": 2048, 00:18:21.378 "data_size": 63488 00:18:21.378 }, 00:18:21.378 { 00:18:21.378 "name": "BaseBdev2", 00:18:21.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.378 "is_configured": false, 00:18:21.378 "data_offset": 0, 00:18:21.378 "data_size": 0 00:18:21.378 } 00:18:21.378 ] 00:18:21.378 }' 00:18:21.378 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.378 11:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.947 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:22.205 [2024-06-10 11:41:54.201337] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.205 [2024-06-10 11:41:54.201851] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:22.205 [2024-06-10 11:41:54.201975] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:22.205 [2024-06-10 11:41:54.202164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:22.205 [2024-06-10 11:41:54.202734] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:22.205 [2024-06-10 11:41:54.202857] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:22.205 BaseBdev2 00:18:22.205 [2024-06-10 11:41:54.203107] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.205 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:22.205 11:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:18:22.205 11:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:22.205 11:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:22.205 11:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:22.205 11:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:22.205 11:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.463 11:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:22.721 [ 00:18:22.721 { 00:18:22.721 "name": "BaseBdev2", 00:18:22.721 "aliases": [ 00:18:22.721 "5090a41d-cedd-47ca-8770-edcc4ee612f8" 00:18:22.721 ], 00:18:22.721 "product_name": "Malloc disk", 00:18:22.721 "block_size": 512, 00:18:22.721 "num_blocks": 65536, 00:18:22.721 "uuid": "5090a41d-cedd-47ca-8770-edcc4ee612f8", 00:18:22.721 "assigned_rate_limits": { 00:18:22.721 "rw_ios_per_sec": 0, 00:18:22.721 "rw_mbytes_per_sec": 0, 00:18:22.721 
"r_mbytes_per_sec": 0, 00:18:22.721 "w_mbytes_per_sec": 0 00:18:22.721 }, 00:18:22.721 "claimed": true, 00:18:22.722 "claim_type": "exclusive_write", 00:18:22.722 "zoned": false, 00:18:22.722 "supported_io_types": { 00:18:22.722 "read": true, 00:18:22.722 "write": true, 00:18:22.722 "unmap": true, 00:18:22.722 "write_zeroes": true, 00:18:22.722 "flush": true, 00:18:22.722 "reset": true, 00:18:22.722 "compare": false, 00:18:22.722 "compare_and_write": false, 00:18:22.722 "abort": true, 00:18:22.722 "nvme_admin": false, 00:18:22.722 "nvme_io": false 00:18:22.722 }, 00:18:22.722 "memory_domains": [ 00:18:22.722 { 00:18:22.722 "dma_device_id": "system", 00:18:22.722 "dma_device_type": 1 00:18:22.722 }, 00:18:22.722 { 00:18:22.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.722 "dma_device_type": 2 00:18:22.722 } 00:18:22.722 ], 00:18:22.722 "driver_specific": {} 00:18:22.722 } 00:18:22.722 ] 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.980 11:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.980 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.980 "name": "Existed_Raid", 00:18:22.980 "uuid": "41e1fac6-c385-4639-9715-170204670cfc", 00:18:22.980 "strip_size_kb": 0, 00:18:22.980 "state": "online", 00:18:22.980 "raid_level": "raid1", 00:18:22.980 "superblock": true, 00:18:22.980 "num_base_bdevs": 2, 00:18:22.980 "num_base_bdevs_discovered": 2, 00:18:22.980 "num_base_bdevs_operational": 2, 00:18:22.980 "base_bdevs_list": [ 00:18:22.980 { 00:18:22.980 "name": "BaseBdev1", 00:18:22.980 "uuid": "47615a2c-5b03-4679-a9e5-bcfcbe7bec9e", 00:18:22.980 "is_configured": true, 00:18:22.980 "data_offset": 2048, 00:18:22.980 "data_size": 63488 00:18:22.980 }, 00:18:22.980 { 00:18:22.980 "name": "BaseBdev2", 00:18:22.980 "uuid": 
"5090a41d-cedd-47ca-8770-edcc4ee612f8", 00:18:22.980 "is_configured": true, 00:18:22.980 "data_offset": 2048, 00:18:22.980 "data_size": 63488 00:18:22.980 } 00:18:22.980 ] 00:18:22.980 }' 00:18:22.980 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.980 11:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.546 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:23.546 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:23.546 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:23.546 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:23.546 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:23.546 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:23.546 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:23.546 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:24.112 [2024-06-10 11:41:55.870033] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.112 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:24.112 "name": "Existed_Raid", 00:18:24.112 "aliases": [ 00:18:24.112 "41e1fac6-c385-4639-9715-170204670cfc" 00:18:24.112 ], 00:18:24.112 "product_name": "Raid Volume", 00:18:24.112 "block_size": 512, 00:18:24.112 "num_blocks": 63488, 00:18:24.112 "uuid": "41e1fac6-c385-4639-9715-170204670cfc", 00:18:24.112 "assigned_rate_limits": { 00:18:24.112 "rw_ios_per_sec": 0, 00:18:24.112 "rw_mbytes_per_sec": 0, 00:18:24.112 "r_mbytes_per_sec": 0, 00:18:24.112 "w_mbytes_per_sec": 0 00:18:24.112 }, 00:18:24.112 "claimed": false, 00:18:24.112 "zoned": false, 00:18:24.112 "supported_io_types": { 00:18:24.112 "read": true, 00:18:24.112 "write": true, 00:18:24.112 "unmap": false, 00:18:24.112 "write_zeroes": true, 00:18:24.112 "flush": false, 00:18:24.112 "reset": true, 00:18:24.112 "compare": false, 00:18:24.112 "compare_and_write": false, 00:18:24.112 "abort": false, 00:18:24.112 "nvme_admin": false, 00:18:24.112 "nvme_io": false 00:18:24.112 }, 00:18:24.112 "memory_domains": [ 00:18:24.112 { 00:18:24.112 "dma_device_id": "system", 00:18:24.112 "dma_device_type": 1 00:18:24.112 }, 00:18:24.112 { 00:18:24.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.112 "dma_device_type": 2 00:18:24.112 }, 00:18:24.112 { 00:18:24.112 "dma_device_id": "system", 00:18:24.112 "dma_device_type": 1 00:18:24.112 }, 00:18:24.112 { 00:18:24.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.112 "dma_device_type": 2 00:18:24.112 } 00:18:24.112 ], 00:18:24.112 "driver_specific": { 00:18:24.112 "raid": { 00:18:24.112 "uuid": "41e1fac6-c385-4639-9715-170204670cfc", 00:18:24.112 "strip_size_kb": 0, 00:18:24.112 "state": "online", 00:18:24.112 "raid_level": "raid1", 00:18:24.112 "superblock": true, 00:18:24.112 "num_base_bdevs": 2, 00:18:24.112 "num_base_bdevs_discovered": 2, 00:18:24.112 "num_base_bdevs_operational": 2, 00:18:24.112 "base_bdevs_list": [ 00:18:24.112 { 00:18:24.112 "name": "BaseBdev1", 00:18:24.112 "uuid": 
"47615a2c-5b03-4679-a9e5-bcfcbe7bec9e", 00:18:24.112 "is_configured": true, 00:18:24.112 "data_offset": 2048, 00:18:24.112 "data_size": 63488 00:18:24.112 }, 00:18:24.112 { 00:18:24.112 "name": "BaseBdev2", 00:18:24.112 "uuid": "5090a41d-cedd-47ca-8770-edcc4ee612f8", 00:18:24.112 "is_configured": true, 00:18:24.112 "data_offset": 2048, 00:18:24.112 "data_size": 63488 00:18:24.112 } 00:18:24.112 ] 00:18:24.112 } 00:18:24.112 } 00:18:24.112 }' 00:18:24.112 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:24.112 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:24.112 BaseBdev2' 00:18:24.112 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:24.112 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:24.112 11:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:24.370 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:24.370 "name": "BaseBdev1", 00:18:24.370 "aliases": [ 00:18:24.370 "47615a2c-5b03-4679-a9e5-bcfcbe7bec9e" 00:18:24.370 ], 00:18:24.370 "product_name": "Malloc disk", 00:18:24.370 "block_size": 512, 00:18:24.370 "num_blocks": 65536, 00:18:24.370 "uuid": "47615a2c-5b03-4679-a9e5-bcfcbe7bec9e", 00:18:24.370 "assigned_rate_limits": { 00:18:24.370 "rw_ios_per_sec": 0, 00:18:24.370 "rw_mbytes_per_sec": 0, 00:18:24.370 "r_mbytes_per_sec": 0, 00:18:24.370 "w_mbytes_per_sec": 0 00:18:24.370 }, 00:18:24.370 "claimed": true, 00:18:24.370 "claim_type": "exclusive_write", 00:18:24.370 "zoned": false, 00:18:24.370 "supported_io_types": { 00:18:24.370 "read": true, 00:18:24.370 "write": true, 00:18:24.370 "unmap": true, 00:18:24.370 "write_zeroes": true, 00:18:24.370 "flush": true, 00:18:24.370 "reset": true, 00:18:24.370 "compare": false, 00:18:24.370 "compare_and_write": false, 00:18:24.370 "abort": true, 00:18:24.370 "nvme_admin": false, 00:18:24.370 "nvme_io": false 00:18:24.370 }, 00:18:24.370 "memory_domains": [ 00:18:24.370 { 00:18:24.370 "dma_device_id": "system", 00:18:24.370 "dma_device_type": 1 00:18:24.370 }, 00:18:24.370 { 00:18:24.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.370 "dma_device_type": 2 00:18:24.370 } 00:18:24.370 ], 00:18:24.370 "driver_specific": {} 00:18:24.370 }' 00:18:24.370 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.370 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.370 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:24.370 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:24.370 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:24.628 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:24.888 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:24.888 "name": "BaseBdev2", 00:18:24.888 "aliases": [ 00:18:24.888 "5090a41d-cedd-47ca-8770-edcc4ee612f8" 00:18:24.888 ], 00:18:24.888 "product_name": "Malloc disk", 00:18:24.888 "block_size": 512, 00:18:24.888 "num_blocks": 65536, 00:18:24.888 "uuid": "5090a41d-cedd-47ca-8770-edcc4ee612f8", 00:18:24.888 "assigned_rate_limits": { 00:18:24.888 "rw_ios_per_sec": 0, 00:18:24.888 "rw_mbytes_per_sec": 0, 00:18:24.888 "r_mbytes_per_sec": 0, 00:18:24.888 "w_mbytes_per_sec": 0 00:18:24.888 }, 00:18:24.888 "claimed": true, 00:18:24.888 "claim_type": "exclusive_write", 00:18:24.888 "zoned": false, 00:18:24.888 "supported_io_types": { 00:18:24.888 "read": true, 00:18:24.888 "write": true, 00:18:24.888 "unmap": true, 00:18:24.888 "write_zeroes": true, 00:18:24.888 "flush": true, 00:18:24.888 "reset": true, 00:18:24.888 "compare": false, 00:18:24.888 "compare_and_write": false, 00:18:24.888 "abort": true, 00:18:24.888 "nvme_admin": false, 00:18:24.888 "nvme_io": false 00:18:24.888 }, 00:18:24.888 "memory_domains": [ 00:18:24.888 { 00:18:24.888 "dma_device_id": "system", 00:18:24.888 "dma_device_type": 1 00:18:24.888 }, 00:18:24.888 { 00:18:24.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.888 "dma_device_type": 2 00:18:24.888 } 00:18:24.888 ], 00:18:24.888 "driver_specific": {} 00:18:24.888 }' 00:18:24.888 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.888 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.147 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:25.147 11:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.147 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.147 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:25.147 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.147 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.147 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:25.147 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.405 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.405 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:25.405 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:25.663 [2024-06-10 11:41:57.479348] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.663 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.664 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.922 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.922 "name": "Existed_Raid", 00:18:25.922 "uuid": "41e1fac6-c385-4639-9715-170204670cfc", 00:18:25.922 "strip_size_kb": 0, 00:18:25.922 "state": "online", 00:18:25.922 "raid_level": "raid1", 00:18:25.922 "superblock": true, 00:18:25.923 "num_base_bdevs": 2, 00:18:25.923 "num_base_bdevs_discovered": 1, 00:18:25.923 "num_base_bdevs_operational": 1, 00:18:25.923 "base_bdevs_list": [ 00:18:25.923 { 00:18:25.923 "name": null, 00:18:25.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.923 "is_configured": false, 00:18:25.923 "data_offset": 2048, 00:18:25.923 "data_size": 63488 00:18:25.923 }, 00:18:25.923 { 00:18:25.923 "name": "BaseBdev2", 00:18:25.923 "uuid": "5090a41d-cedd-47ca-8770-edcc4ee612f8", 00:18:25.923 "is_configured": true, 00:18:25.923 "data_offset": 2048, 00:18:25.923 "data_size": 63488 00:18:25.923 } 00:18:25.923 ] 00:18:25.923 }' 00:18:25.923 11:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.923 11:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 11:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:26.857 11:41:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:26.857 11:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.857 11:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:26.857 11:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:26.857 11:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:26.857 11:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:27.423 [2024-06-10 11:41:59.191438] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.423 [2024-06-10 11:41:59.191812] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.423 [2024-06-10 11:41:59.305544] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.423 [2024-06-10 11:41:59.305832] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.423 [2024-06-10 11:41:59.305937] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:27.423 11:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:27.423 11:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:27.424 11:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:27.424 11:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 125119 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 125119 ']' 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 125119 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 125119 00:18:27.682 killing process with pid 125119 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 125119' 00:18:27.682 11:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 125119 00:18:27.682 11:41:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 125119 00:18:27.682 [2024-06-10 11:41:59.659712] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.682 [2024-06-10 11:41:59.659864] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.586 11:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:29.586 ************************************ 00:18:29.586 END TEST raid_state_function_test_sb 00:18:29.586 ************************************ 00:18:29.586 00:18:29.586 real 0m13.130s 00:18:29.586 user 0m22.525s 00:18:29.586 sys 0m1.756s 00:18:29.586 11:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:29.586 11:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.586 11:42:01 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:18:29.586 11:42:01 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:18:29.586 11:42:01 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:29.586 11:42:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.586 ************************************ 00:18:29.586 START TEST raid_superblock_test 00:18:29.586 ************************************ 00:18:29.586 11:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:18:29.586 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:29.586 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:29.586 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:29.586 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:29.586 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:29.586 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=125516 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 125516 /var/tmp/spdk-raid.sock 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@830 -- # '[' -z 125516 ']' 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:29.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:29.587 11:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.587 [2024-06-10 11:42:01.353465] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:18:29.587 [2024-06-10 11:42:01.353950] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125516 ] 00:18:29.587 [2024-06-10 11:42:01.540307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.845 [2024-06-10 11:42:01.834916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.103 [2024-06-10 11:42:02.139214] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.360 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:30.925 malloc1 00:18:30.925 11:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.183 [2024-06-10 11:42:03.064660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.183 [2024-06-10 11:42:03.065101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.183 [2024-06-10 11:42:03.065305] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:31.183 [2024-06-10 11:42:03.065461] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:18:31.183 [2024-06-10 11:42:03.069072] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.183 [2024-06-10 11:42:03.069309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.183 pt1 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:31.183 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:31.440 malloc2 00:18:31.440 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:32.007 [2024-06-10 11:42:03.773811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:32.007 [2024-06-10 11:42:03.774254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.007 [2024-06-10 11:42:03.774502] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:32.007 [2024-06-10 11:42:03.774742] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.007 [2024-06-10 11:42:03.778306] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.008 [2024-06-10 11:42:03.778571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:32.008 pt2 00:18:32.008 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:32.008 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:32.008 11:42:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:32.008 [2024-06-10 11:42:04.035168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:32.008 [2024-06-10 11:42:04.037689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.008 [2024-06-10 11:42:04.038064] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:18:32.008 [2024-06-10 11:42:04.038187] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:32.008 [2024-06-10 11:42:04.038384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:32.008 [2024-06-10 11:42:04.038846] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:18:32.008 [2024-06-10 11:42:04.038969] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:18:32.008 [2024-06-10 11:42:04.039280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.008 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.574 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.574 "name": "raid_bdev1", 00:18:32.574 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:32.574 "strip_size_kb": 0, 00:18:32.574 "state": "online", 00:18:32.574 "raid_level": "raid1", 00:18:32.574 "superblock": true, 00:18:32.574 "num_base_bdevs": 2, 00:18:32.574 "num_base_bdevs_discovered": 2, 00:18:32.574 "num_base_bdevs_operational": 2, 00:18:32.574 "base_bdevs_list": [ 00:18:32.574 { 00:18:32.574 "name": "pt1", 00:18:32.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.574 "is_configured": true, 00:18:32.574 "data_offset": 2048, 00:18:32.574 "data_size": 63488 00:18:32.574 }, 00:18:32.574 { 00:18:32.574 "name": "pt2", 00:18:32.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.574 "is_configured": true, 00:18:32.574 "data_offset": 2048, 00:18:32.574 "data_size": 63488 00:18:32.574 } 00:18:32.574 ] 00:18:32.574 }' 00:18:32.574 11:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.574 11:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.139 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:33.139 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:33.139 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:33.139 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:33.139 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:33.139 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:33.139 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:33.139 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:33.705 [2024-06-10 11:42:05.467921] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.705 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:33.705 "name": "raid_bdev1", 00:18:33.705 "aliases": [ 00:18:33.705 "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0" 00:18:33.705 ], 00:18:33.705 "product_name": "Raid Volume", 00:18:33.705 "block_size": 512, 00:18:33.705 "num_blocks": 63488, 00:18:33.705 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:33.705 "assigned_rate_limits": { 00:18:33.705 "rw_ios_per_sec": 0, 00:18:33.705 "rw_mbytes_per_sec": 0, 00:18:33.705 "r_mbytes_per_sec": 0, 00:18:33.705 "w_mbytes_per_sec": 0 00:18:33.705 }, 00:18:33.705 "claimed": false, 00:18:33.705 "zoned": false, 00:18:33.705 "supported_io_types": { 00:18:33.705 "read": true, 00:18:33.705 "write": true, 00:18:33.705 "unmap": false, 00:18:33.705 "write_zeroes": true, 00:18:33.705 "flush": false, 00:18:33.705 "reset": true, 00:18:33.705 "compare": false, 00:18:33.705 "compare_and_write": false, 00:18:33.705 "abort": false, 00:18:33.705 "nvme_admin": false, 00:18:33.705 "nvme_io": false 00:18:33.705 }, 00:18:33.705 "memory_domains": [ 00:18:33.705 { 00:18:33.705 "dma_device_id": "system", 00:18:33.705 "dma_device_type": 1 00:18:33.705 }, 00:18:33.705 { 00:18:33.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.705 "dma_device_type": 2 00:18:33.705 }, 00:18:33.705 { 00:18:33.705 "dma_device_id": "system", 00:18:33.705 "dma_device_type": 1 00:18:33.705 }, 00:18:33.705 { 00:18:33.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.705 "dma_device_type": 2 00:18:33.705 } 00:18:33.705 ], 00:18:33.705 "driver_specific": { 00:18:33.705 "raid": { 00:18:33.705 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:33.705 "strip_size_kb": 0, 00:18:33.705 "state": "online", 00:18:33.705 "raid_level": "raid1", 00:18:33.705 "superblock": true, 00:18:33.705 "num_base_bdevs": 2, 00:18:33.705 "num_base_bdevs_discovered": 2, 00:18:33.705 "num_base_bdevs_operational": 2, 00:18:33.705 "base_bdevs_list": [ 00:18:33.705 { 00:18:33.705 "name": "pt1", 00:18:33.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.705 "is_configured": true, 00:18:33.705 "data_offset": 2048, 00:18:33.705 "data_size": 63488 00:18:33.705 }, 00:18:33.705 { 00:18:33.705 "name": "pt2", 00:18:33.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.705 "is_configured": true, 00:18:33.705 "data_offset": 2048, 00:18:33.705 "data_size": 63488 00:18:33.705 } 00:18:33.705 ] 00:18:33.705 } 00:18:33.705 } 00:18:33.705 }' 00:18:33.705 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:33.705 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:33.705 pt2' 00:18:33.705 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:33.705 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:33.705 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:33.963 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:18:33.963 "name": "pt1", 00:18:33.963 "aliases": [ 00:18:33.963 "00000000-0000-0000-0000-000000000001" 00:18:33.963 ], 00:18:33.963 "product_name": "passthru", 00:18:33.963 "block_size": 512, 00:18:33.963 "num_blocks": 65536, 00:18:33.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.963 "assigned_rate_limits": { 00:18:33.963 "rw_ios_per_sec": 0, 00:18:33.963 "rw_mbytes_per_sec": 0, 00:18:33.963 "r_mbytes_per_sec": 0, 00:18:33.963 "w_mbytes_per_sec": 0 00:18:33.963 }, 00:18:33.963 "claimed": true, 00:18:33.963 "claim_type": "exclusive_write", 00:18:33.963 "zoned": false, 00:18:33.963 "supported_io_types": { 00:18:33.963 "read": true, 00:18:33.963 "write": true, 00:18:33.963 "unmap": true, 00:18:33.963 "write_zeroes": true, 00:18:33.963 "flush": true, 00:18:33.963 "reset": true, 00:18:33.963 "compare": false, 00:18:33.963 "compare_and_write": false, 00:18:33.963 "abort": true, 00:18:33.963 "nvme_admin": false, 00:18:33.963 "nvme_io": false 00:18:33.963 }, 00:18:33.963 "memory_domains": [ 00:18:33.963 { 00:18:33.963 "dma_device_id": "system", 00:18:33.963 "dma_device_type": 1 00:18:33.963 }, 00:18:33.963 { 00:18:33.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.963 "dma_device_type": 2 00:18:33.963 } 00:18:33.963 ], 00:18:33.963 "driver_specific": { 00:18:33.963 "passthru": { 00:18:33.963 "name": "pt1", 00:18:33.963 "base_bdev_name": "malloc1" 00:18:33.963 } 00:18:33.963 } 00:18:33.964 }' 00:18:33.964 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.964 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.964 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:33.964 11:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:34.221 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:34.787 "name": "pt2", 00:18:34.787 "aliases": [ 00:18:34.787 "00000000-0000-0000-0000-000000000002" 00:18:34.787 ], 00:18:34.787 "product_name": "passthru", 00:18:34.787 "block_size": 512, 00:18:34.787 "num_blocks": 65536, 00:18:34.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.787 "assigned_rate_limits": { 00:18:34.787 "rw_ios_per_sec": 0, 00:18:34.787 "rw_mbytes_per_sec": 0, 00:18:34.787 "r_mbytes_per_sec": 0, 00:18:34.787 
"w_mbytes_per_sec": 0 00:18:34.787 }, 00:18:34.787 "claimed": true, 00:18:34.787 "claim_type": "exclusive_write", 00:18:34.787 "zoned": false, 00:18:34.787 "supported_io_types": { 00:18:34.787 "read": true, 00:18:34.787 "write": true, 00:18:34.787 "unmap": true, 00:18:34.787 "write_zeroes": true, 00:18:34.787 "flush": true, 00:18:34.787 "reset": true, 00:18:34.787 "compare": false, 00:18:34.787 "compare_and_write": false, 00:18:34.787 "abort": true, 00:18:34.787 "nvme_admin": false, 00:18:34.787 "nvme_io": false 00:18:34.787 }, 00:18:34.787 "memory_domains": [ 00:18:34.787 { 00:18:34.787 "dma_device_id": "system", 00:18:34.787 "dma_device_type": 1 00:18:34.787 }, 00:18:34.787 { 00:18:34.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.787 "dma_device_type": 2 00:18:34.787 } 00:18:34.787 ], 00:18:34.787 "driver_specific": { 00:18:34.787 "passthru": { 00:18:34.787 "name": "pt2", 00:18:34.787 "base_bdev_name": "malloc2" 00:18:34.787 } 00:18:34.787 } 00:18:34.787 }' 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:34.787 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.044 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:35.044 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.044 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.044 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:35.044 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:35.044 11:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:35.302 [2024-06-10 11:42:07.224287] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.302 11:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0 00:18:35.302 11:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0 ']' 00:18:35.302 11:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:35.559 [2024-06-10 11:42:07.572048] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.559 [2024-06-10 11:42:07.572271] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.559 [2024-06-10 11:42:07.572467] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.559 [2024-06-10 11:42:07.572681] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:35.559 [2024-06-10 11:42:07.572804] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:18:35.559 11:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.559 11:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:36.126 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:36.126 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:36.126 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.126 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:36.383 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.383 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:36.641 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:36.641 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:37.207 11:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:37.465 [2024-06-10 11:42:09.308524] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:37.465 [2024-06-10 11:42:09.311338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:37.465 [2024-06-10 11:42:09.311586] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:37.465 [2024-06-10 11:42:09.311869] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:37.465 [2024-06-10 11:42:09.312033] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.465 [2024-06-10 11:42:09.312132] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:18:37.465 request: 00:18:37.465 { 00:18:37.465 "name": "raid_bdev1", 00:18:37.465 "raid_level": "raid1", 00:18:37.465 "base_bdevs": [ 00:18:37.465 "malloc1", 00:18:37.465 "malloc2" 00:18:37.465 ], 00:18:37.465 "superblock": false, 00:18:37.465 "method": "bdev_raid_create", 00:18:37.465 "req_id": 1 00:18:37.465 } 00:18:37.465 Got JSON-RPC error response 00:18:37.465 response: 00:18:37.465 { 00:18:37.465 "code": -17, 00:18:37.465 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:37.465 } 00:18:37.465 11:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:18:37.465 11:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:37.465 11:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:37.465 11:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:37.465 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.465 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:37.723 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:37.723 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:37.723 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:37.981 [2024-06-10 11:42:09.820588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:37.981 [2024-06-10 11:42:09.820890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.981 [2024-06-10 11:42:09.821058] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:37.981 [2024-06-10 11:42:09.821179] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.981 [2024-06-10 11:42:09.824108] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.981 [2024-06-10 11:42:09.824317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:37.981 [2024-06-10 11:42:09.824569] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:37.981 [2024-06-10 11:42:09.824723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.981 pt1 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.981 11:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.239 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.239 "name": "raid_bdev1", 00:18:38.239 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:38.239 "strip_size_kb": 0, 00:18:38.239 "state": "configuring", 00:18:38.239 "raid_level": "raid1", 00:18:38.239 "superblock": true, 00:18:38.239 "num_base_bdevs": 2, 00:18:38.239 "num_base_bdevs_discovered": 1, 00:18:38.239 "num_base_bdevs_operational": 2, 00:18:38.239 "base_bdevs_list": [ 00:18:38.239 { 00:18:38.239 "name": "pt1", 00:18:38.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.239 "is_configured": true, 00:18:38.239 "data_offset": 2048, 00:18:38.239 "data_size": 63488 00:18:38.239 }, 00:18:38.239 { 00:18:38.239 "name": null, 00:18:38.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.239 "is_configured": false, 00:18:38.239 "data_offset": 2048, 00:18:38.239 "data_size": 63488 00:18:38.239 } 00:18:38.239 ] 00:18:38.239 }' 00:18:38.239 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.239 11:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.805 [2024-06-10 11:42:10.796903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.805 [2024-06-10 11:42:10.797339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.805 [2024-06-10 11:42:10.797510] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:38.805 [2024-06-10 11:42:10.797649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.805 [2024-06-10 11:42:10.798408] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.805 [2024-06-10 11:42:10.798645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.805 [2024-06-10 11:42:10.798975] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:38.805 [2024-06-10 11:42:10.799151] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.805 [2024-06-10 11:42:10.799435] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:18:38.805 [2024-06-10 11:42:10.799555] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:38.805 [2024-06-10 11:42:10.799828] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:38.805 [2024-06-10 11:42:10.800413] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:18:38.805 [2024-06-10 11:42:10.800566] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:18:38.805 [2024-06-10 11:42:10.800901] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.805 pt2 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:38.805 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:38.806 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:38.806 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:38.806 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.806 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.806 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.806 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.806 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.806 11:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.064 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.064 "name": "raid_bdev1", 00:18:39.064 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:39.064 "strip_size_kb": 0, 00:18:39.064 "state": "online", 00:18:39.064 "raid_level": "raid1", 00:18:39.064 "superblock": true, 00:18:39.064 "num_base_bdevs": 2, 00:18:39.064 "num_base_bdevs_discovered": 2, 00:18:39.064 "num_base_bdevs_operational": 2, 00:18:39.064 "base_bdevs_list": [ 00:18:39.064 { 00:18:39.064 "name": "pt1", 00:18:39.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.064 "is_configured": true, 00:18:39.064 "data_offset": 2048, 00:18:39.064 "data_size": 63488 00:18:39.064 }, 00:18:39.064 { 
00:18:39.064 "name": "pt2", 00:18:39.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.064 "is_configured": true, 00:18:39.064 "data_offset": 2048, 00:18:39.064 "data_size": 63488 00:18:39.064 } 00:18:39.064 ] 00:18:39.064 }' 00:18:39.064 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.064 11:42:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.668 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:39.668 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:39.668 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:39.668 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:39.668 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:39.668 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:39.668 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:39.668 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:39.944 [2024-06-10 11:42:11.885409] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.944 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:39.944 "name": "raid_bdev1", 00:18:39.944 "aliases": [ 00:18:39.944 "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0" 00:18:39.944 ], 00:18:39.944 "product_name": "Raid Volume", 00:18:39.944 "block_size": 512, 00:18:39.944 "num_blocks": 63488, 00:18:39.944 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:39.944 "assigned_rate_limits": { 00:18:39.944 "rw_ios_per_sec": 0, 00:18:39.944 "rw_mbytes_per_sec": 0, 00:18:39.944 "r_mbytes_per_sec": 0, 00:18:39.944 "w_mbytes_per_sec": 0 00:18:39.944 }, 00:18:39.944 "claimed": false, 00:18:39.944 "zoned": false, 00:18:39.944 "supported_io_types": { 00:18:39.944 "read": true, 00:18:39.944 "write": true, 00:18:39.944 "unmap": false, 00:18:39.944 "write_zeroes": true, 00:18:39.944 "flush": false, 00:18:39.944 "reset": true, 00:18:39.944 "compare": false, 00:18:39.944 "compare_and_write": false, 00:18:39.944 "abort": false, 00:18:39.944 "nvme_admin": false, 00:18:39.944 "nvme_io": false 00:18:39.944 }, 00:18:39.944 "memory_domains": [ 00:18:39.944 { 00:18:39.944 "dma_device_id": "system", 00:18:39.944 "dma_device_type": 1 00:18:39.944 }, 00:18:39.944 { 00:18:39.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.944 "dma_device_type": 2 00:18:39.944 }, 00:18:39.944 { 00:18:39.944 "dma_device_id": "system", 00:18:39.944 "dma_device_type": 1 00:18:39.944 }, 00:18:39.944 { 00:18:39.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.944 "dma_device_type": 2 00:18:39.944 } 00:18:39.944 ], 00:18:39.944 "driver_specific": { 00:18:39.944 "raid": { 00:18:39.944 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:39.944 "strip_size_kb": 0, 00:18:39.945 "state": "online", 00:18:39.945 "raid_level": "raid1", 00:18:39.945 "superblock": true, 00:18:39.945 "num_base_bdevs": 2, 00:18:39.945 "num_base_bdevs_discovered": 2, 00:18:39.945 "num_base_bdevs_operational": 2, 00:18:39.945 "base_bdevs_list": [ 00:18:39.945 { 00:18:39.945 "name": "pt1", 00:18:39.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.945 
"is_configured": true, 00:18:39.945 "data_offset": 2048, 00:18:39.945 "data_size": 63488 00:18:39.945 }, 00:18:39.945 { 00:18:39.945 "name": "pt2", 00:18:39.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.945 "is_configured": true, 00:18:39.945 "data_offset": 2048, 00:18:39.945 "data_size": 63488 00:18:39.945 } 00:18:39.945 ] 00:18:39.945 } 00:18:39.945 } 00:18:39.945 }' 00:18:39.945 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.945 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:39.945 pt2' 00:18:39.945 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:39.945 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:39.945 11:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:40.205 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:40.205 "name": "pt1", 00:18:40.205 "aliases": [ 00:18:40.205 "00000000-0000-0000-0000-000000000001" 00:18:40.205 ], 00:18:40.205 "product_name": "passthru", 00:18:40.205 "block_size": 512, 00:18:40.205 "num_blocks": 65536, 00:18:40.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.205 "assigned_rate_limits": { 00:18:40.205 "rw_ios_per_sec": 0, 00:18:40.205 "rw_mbytes_per_sec": 0, 00:18:40.205 "r_mbytes_per_sec": 0, 00:18:40.205 "w_mbytes_per_sec": 0 00:18:40.205 }, 00:18:40.205 "claimed": true, 00:18:40.205 "claim_type": "exclusive_write", 00:18:40.205 "zoned": false, 00:18:40.205 "supported_io_types": { 00:18:40.205 "read": true, 00:18:40.205 "write": true, 00:18:40.205 "unmap": true, 00:18:40.205 "write_zeroes": true, 00:18:40.205 "flush": true, 00:18:40.205 "reset": true, 00:18:40.205 "compare": false, 00:18:40.205 "compare_and_write": false, 00:18:40.205 "abort": true, 00:18:40.205 "nvme_admin": false, 00:18:40.205 "nvme_io": false 00:18:40.205 }, 00:18:40.205 "memory_domains": [ 00:18:40.205 { 00:18:40.205 "dma_device_id": "system", 00:18:40.205 "dma_device_type": 1 00:18:40.205 }, 00:18:40.205 { 00:18:40.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.205 "dma_device_type": 2 00:18:40.205 } 00:18:40.205 ], 00:18:40.205 "driver_specific": { 00:18:40.205 "passthru": { 00:18:40.205 "name": "pt1", 00:18:40.205 "base_bdev_name": "malloc1" 00:18:40.205 } 00:18:40.205 } 00:18:40.205 }' 00:18:40.205 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.462 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.462 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:40.462 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.462 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.462 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:40.462 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.462 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.718 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:40.718 11:42:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.718 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.718 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:40.718 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:40.718 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:40.718 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:40.976 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:40.976 "name": "pt2", 00:18:40.976 "aliases": [ 00:18:40.976 "00000000-0000-0000-0000-000000000002" 00:18:40.976 ], 00:18:40.976 "product_name": "passthru", 00:18:40.976 "block_size": 512, 00:18:40.976 "num_blocks": 65536, 00:18:40.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.976 "assigned_rate_limits": { 00:18:40.976 "rw_ios_per_sec": 0, 00:18:40.976 "rw_mbytes_per_sec": 0, 00:18:40.976 "r_mbytes_per_sec": 0, 00:18:40.976 "w_mbytes_per_sec": 0 00:18:40.976 }, 00:18:40.976 "claimed": true, 00:18:40.976 "claim_type": "exclusive_write", 00:18:40.976 "zoned": false, 00:18:40.976 "supported_io_types": { 00:18:40.976 "read": true, 00:18:40.976 "write": true, 00:18:40.976 "unmap": true, 00:18:40.976 "write_zeroes": true, 00:18:40.976 "flush": true, 00:18:40.976 "reset": true, 00:18:40.976 "compare": false, 00:18:40.976 "compare_and_write": false, 00:18:40.976 "abort": true, 00:18:40.976 "nvme_admin": false, 00:18:40.976 "nvme_io": false 00:18:40.976 }, 00:18:40.976 "memory_domains": [ 00:18:40.976 { 00:18:40.976 "dma_device_id": "system", 00:18:40.976 "dma_device_type": 1 00:18:40.976 }, 00:18:40.976 { 00:18:40.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.976 "dma_device_type": 2 00:18:40.976 } 00:18:40.976 ], 00:18:40.976 "driver_specific": { 00:18:40.976 "passthru": { 00:18:40.976 "name": "pt2", 00:18:40.976 "base_bdev_name": "malloc2" 00:18:40.976 } 00:18:40.976 } 00:18:40.976 }' 00:18:40.976 11:42:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.976 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:41.233 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:41.233 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:41.234 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:41.234 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:41.234 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:41.234 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:41.234 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:41.234 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:41.234 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:41.491 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:41.491 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:41.491 11:42:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:41.748 [2024-06-10 11:42:13.595354] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.748 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0 '!=' 2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0 ']' 00:18:41.748 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:41.748 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:41.748 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:41.748 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:42.006 [2024-06-10 11:42:13.907199] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.006 11:42:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.266 11:42:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.266 "name": "raid_bdev1", 00:18:42.266 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:42.266 "strip_size_kb": 0, 00:18:42.266 "state": "online", 00:18:42.266 "raid_level": "raid1", 00:18:42.266 "superblock": true, 00:18:42.266 "num_base_bdevs": 2, 00:18:42.266 "num_base_bdevs_discovered": 1, 00:18:42.266 "num_base_bdevs_operational": 1, 00:18:42.266 "base_bdevs_list": [ 00:18:42.266 { 00:18:42.266 "name": null, 00:18:42.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.266 "is_configured": false, 00:18:42.266 "data_offset": 2048, 00:18:42.266 "data_size": 63488 00:18:42.266 }, 00:18:42.266 { 00:18:42.266 "name": "pt2", 00:18:42.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.266 "is_configured": true, 00:18:42.266 "data_offset": 2048, 00:18:42.266 "data_size": 63488 00:18:42.266 } 00:18:42.266 ] 00:18:42.266 }' 00:18:42.266 11:42:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.266 11:42:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.834 11:42:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:43.093 [2024-06-10 11:42:15.035310] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.093 [2024-06-10 11:42:15.035657] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.093 [2024-06-10 11:42:15.035874] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.093 [2024-06-10 11:42:15.036068] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.093 [2024-06-10 11:42:15.036202] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:18:43.093 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.093 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:43.352 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:43.352 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:43.352 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:43.352 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:43.352 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:43.611 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:43.611 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:43.611 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:43.611 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:43.611 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:18:43.611 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:43.869 [2024-06-10 11:42:15.743121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.869 [2024-06-10 11:42:15.743443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.869 [2024-06-10 11:42:15.743520] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:43.869 [2024-06-10 11:42:15.743627] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.869 [2024-06-10 11:42:15.746211] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.869 [2024-06-10 11:42:15.746398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.869 [2024-06-10 11:42:15.746609] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:43.869 [2024-06-10 11:42:15.746806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.869 [2024-06-10 11:42:15.746962] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:18:43.869 [2024-06-10 11:42:15.747099] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:43.869 [2024-06-10 11:42:15.747224] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:43.869 [2024-06-10 11:42:15.747666] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:18:43.869 [2024-06-10 11:42:15.747785] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:18:43.869 [2024-06-10 11:42:15.748070] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.869 pt2 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.869 11:42:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.128 11:42:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.128 "name": "raid_bdev1", 00:18:44.128 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:44.128 "strip_size_kb": 0, 00:18:44.128 "state": "online", 00:18:44.128 "raid_level": "raid1", 00:18:44.128 "superblock": true, 00:18:44.128 "num_base_bdevs": 2, 00:18:44.128 "num_base_bdevs_discovered": 1, 00:18:44.128 "num_base_bdevs_operational": 1, 00:18:44.128 "base_bdevs_list": [ 00:18:44.128 { 00:18:44.128 "name": null, 00:18:44.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.128 "is_configured": false, 00:18:44.128 "data_offset": 2048, 00:18:44.128 "data_size": 63488 00:18:44.128 }, 00:18:44.128 { 00:18:44.128 "name": "pt2", 00:18:44.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.128 "is_configured": true, 00:18:44.128 "data_offset": 2048, 00:18:44.128 "data_size": 63488 00:18:44.128 } 00:18:44.128 ] 00:18:44.128 }' 00:18:44.128 11:42:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.128 11:42:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.695 11:42:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:44.953 [2024-06-10 11:42:16.864230] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.953 [2024-06-10 11:42:16.864506] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.953 [2024-06-10 11:42:16.864668] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.953 [2024-06-10 11:42:16.864799] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.953 [2024-06-10 11:42:16.864922] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:18:44.953 11:42:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.953 11:42:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:45.211 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:45.211 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:45.211 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:45.211 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:45.470 [2024-06-10 11:42:17.400437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:45.470 [2024-06-10 11:42:17.400776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.470 [2024-06-10 11:42:17.400926] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:45.470 [2024-06-10 11:42:17.401042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.470 [2024-06-10 11:42:17.403838] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.470 [2024-06-10 11:42:17.404079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:45.470 [2024-06-10 11:42:17.404361] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:45.470 [2024-06-10 11:42:17.404516] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:45.470 [2024-06-10 11:42:17.404804] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:45.470 [2024-06-10 11:42:17.404945] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.470 [2024-06-10 11:42:17.405016] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:18:45.470 [2024-06-10 11:42:17.405344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:45.470 [2024-06-10 11:42:17.405480] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:45.470 [2024-06-10 11:42:17.405573] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:45.470 pt1 00:18:45.470 [2024-06-10 11:42:17.405730] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:45.470 [2024-06-10 11:42:17.406066] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:45.470 [2024-06-10 11:42:17.406190] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:45.470 
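After the superblock examine path re-creates raid_bdev1 from pt2 alone, the degraded-but-online result can be read back with the same get-bdevs RPC the test already uses; the jq filter here is only one possible way to pull out the fields the verify step compares:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")
               | "\(.state) discovered=\(.num_base_bdevs_discovered) operational=\(.num_base_bdevs_operational)"'
    # in this run the verify step sees: online discovered=1 operational=1 (only the pt2 slot is configured)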
[2024-06-10 11:42:17.406434] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.470 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.035 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.035 "name": "raid_bdev1", 00:18:46.035 "uuid": "2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0", 00:18:46.035 "strip_size_kb": 0, 00:18:46.035 "state": "online", 00:18:46.035 "raid_level": "raid1", 00:18:46.035 "superblock": true, 00:18:46.035 "num_base_bdevs": 2, 00:18:46.035 "num_base_bdevs_discovered": 1, 00:18:46.035 "num_base_bdevs_operational": 1, 00:18:46.035 "base_bdevs_list": [ 00:18:46.035 { 00:18:46.035 "name": null, 00:18:46.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.035 "is_configured": false, 00:18:46.035 "data_offset": 2048, 00:18:46.035 "data_size": 63488 00:18:46.035 }, 00:18:46.035 { 00:18:46.035 "name": "pt2", 00:18:46.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.035 "is_configured": true, 00:18:46.035 "data_offset": 2048, 00:18:46.035 "data_size": 63488 00:18:46.035 } 00:18:46.035 ] 00:18:46.035 }' 00:18:46.035 11:42:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.035 11:42:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.600 11:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:46.600 11:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:46.858 11:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:46.858 11:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:46.858 11:42:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:47.117 [2024-06-10 11:42:19.113040] bdev_raid.c:1107:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0 '!=' 2228cb9b-ee37-4c1c-a84f-bc0ac6b7bbf0 ']' 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 125516 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 125516 ']' 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 125516 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 125516 00:18:47.117 killing process with pid 125516 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 125516' 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 125516 00:18:47.117 [2024-06-10 11:42:19.162043] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.117 [2024-06-10 11:42:19.162136] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.117 11:42:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 125516 00:18:47.117 [2024-06-10 11:42:19.162192] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.117 [2024-06-10 11:42:19.162204] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:47.375 [2024-06-10 11:42:19.380331] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.276 ************************************ 00:18:49.276 END TEST raid_superblock_test 00:18:49.276 ************************************ 00:18:49.276 11:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:49.276 00:18:49.276 real 0m19.562s 00:18:49.276 user 0m34.802s 00:18:49.276 sys 0m2.856s 00:18:49.276 11:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:49.276 11:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.276 11:42:20 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:18:49.276 11:42:20 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:49.276 11:42:20 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:49.276 11:42:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.276 ************************************ 00:18:49.276 START TEST raid_read_error_test 00:18:49.276 ************************************ 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 2 read 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Z1jTSQFevS 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=126080 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 126080 /var/tmp/spdk-raid.sock 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 126080 ']' 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:49.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:49.276 11:42:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.276 [2024-06-10 11:42:20.970064] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:18:49.277 [2024-06-10 11:42:20.970506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126080 ] 00:18:49.277 [2024-06-10 11:42:21.153668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.534 [2024-06-10 11:42:21.449597] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.792 [2024-06-10 11:42:21.722409] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.050 11:42:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:50.050 11:42:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:18:50.050 11:42:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:50.050 11:42:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:50.317 BaseBdev1_malloc 00:18:50.317 11:42:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:50.578 true 00:18:50.578 11:42:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:50.836 [2024-06-10 11:42:22.818886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:50.836 [2024-06-10 11:42:22.819292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.836 [2024-06-10 11:42:22.819476] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:50.836 [2024-06-10 11:42:22.819613] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.836 [2024-06-10 11:42:22.822566] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.836 [2024-06-10 11:42:22.822850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:50.836 BaseBdev1 00:18:50.836 11:42:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:50.836 11:42:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:51.403 BaseBdev2_malloc 00:18:51.403 11:42:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:51.661 true 00:18:51.661 11:42:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:51.919 [2024-06-10 11:42:23.898381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:51.919 [2024-06-10 11:42:23.898819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.919 [2024-06-10 11:42:23.899012] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:51.919 [2024-06-10 11:42:23.899148] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.919 [2024-06-10 11:42:23.902111] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.919 [2024-06-10 11:42:23.902372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:51.919 BaseBdev2 00:18:51.919 11:42:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:52.178 [2024-06-10 11:42:24.206963] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.178 [2024-06-10 11:42:24.211776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.178 [2024-06-10 11:42:24.212563] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:18:52.178 [2024-06-10 11:42:24.212761] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:52.178 [2024-06-10 11:42:24.213030] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:52.178 [2024-06-10 11:42:24.213763] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:18:52.178 [2024-06-10 11:42:24.213935] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:18:52.178 [2024-06-10 11:42:24.214335] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.178 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.178 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:52.178 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:52.178 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:52.178 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:52.178 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:52.178 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.178 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.438 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.438 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.438 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.438 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.699 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.699 "name": "raid_bdev1", 00:18:52.699 "uuid": "b335a822-2587-4487-a832-1a5da8883116", 00:18:52.699 "strip_size_kb": 0, 00:18:52.699 "state": "online", 00:18:52.699 "raid_level": "raid1", 00:18:52.699 "superblock": true, 00:18:52.699 "num_base_bdevs": 2, 00:18:52.699 "num_base_bdevs_discovered": 2, 00:18:52.699 "num_base_bdevs_operational": 2, 00:18:52.699 "base_bdevs_list": [ 00:18:52.699 { 00:18:52.699 "name": "BaseBdev1", 00:18:52.699 "uuid": 
"dc061a6a-d836-54c4-b00e-614200c9fa57", 00:18:52.699 "is_configured": true, 00:18:52.699 "data_offset": 2048, 00:18:52.699 "data_size": 63488 00:18:52.699 }, 00:18:52.699 { 00:18:52.699 "name": "BaseBdev2", 00:18:52.699 "uuid": "5a02a0f1-c872-5732-95e3-9f61c840b556", 00:18:52.699 "is_configured": true, 00:18:52.699 "data_offset": 2048, 00:18:52.699 "data_size": 63488 00:18:52.699 } 00:18:52.699 ] 00:18:52.699 }' 00:18:52.699 11:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.699 11:42:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.266 11:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:53.266 11:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:53.524 [2024-06-10 11:42:25.386375] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:54.459 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.717 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.975 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.976 "name": "raid_bdev1", 00:18:54.976 "uuid": "b335a822-2587-4487-a832-1a5da8883116", 00:18:54.976 "strip_size_kb": 0, 00:18:54.976 "state": "online", 00:18:54.976 "raid_level": "raid1", 00:18:54.976 "superblock": true, 00:18:54.976 "num_base_bdevs": 2, 00:18:54.976 "num_base_bdevs_discovered": 2, 00:18:54.976 "num_base_bdevs_operational": 2, 00:18:54.976 
"base_bdevs_list": [ 00:18:54.976 { 00:18:54.976 "name": "BaseBdev1", 00:18:54.976 "uuid": "dc061a6a-d836-54c4-b00e-614200c9fa57", 00:18:54.976 "is_configured": true, 00:18:54.976 "data_offset": 2048, 00:18:54.976 "data_size": 63488 00:18:54.976 }, 00:18:54.976 { 00:18:54.976 "name": "BaseBdev2", 00:18:54.976 "uuid": "5a02a0f1-c872-5732-95e3-9f61c840b556", 00:18:54.976 "is_configured": true, 00:18:54.976 "data_offset": 2048, 00:18:54.976 "data_size": 63488 00:18:54.976 } 00:18:54.976 ] 00:18:54.976 }' 00:18:54.976 11:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.976 11:42:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.542 11:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:55.801 [2024-06-10 11:42:27.815731] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.801 [2024-06-10 11:42:27.816119] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.801 [2024-06-10 11:42:27.819632] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.801 [2024-06-10 11:42:27.820031] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.801 0 00:18:55.801 [2024-06-10 11:42:27.820354] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.801 [2024-06-10 11:42:27.820385] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:55.801 11:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 126080 00:18:55.801 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 126080 ']' 00:18:55.801 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 126080 00:18:55.801 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:18:55.801 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:55.801 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 126080 00:18:56.059 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:56.059 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:56.059 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 126080' 00:18:56.059 killing process with pid 126080 00:18:56.059 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 126080 00:18:56.059 11:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 126080 00:18:56.059 [2024-06-10 11:42:27.864877] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.059 [2024-06-10 11:42:28.014648] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.963 11:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Z1jTSQFevS 00:18:57.963 11:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:57.964 11:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:57.964 11:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- 
# fail_per_s=0.00 00:18:57.964 11:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:57.964 11:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:57.964 11:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:57.964 11:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:57.964 00:18:57.964 real 0m8.736s 00:18:57.964 user 0m13.150s 00:18:57.964 sys 0m1.080s 00:18:57.964 11:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:57.964 ************************************ 00:18:57.964 END TEST raid_read_error_test 00:18:57.964 ************************************ 00:18:57.964 11:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.964 11:42:29 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:18:57.964 11:42:29 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:57.964 11:42:29 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:57.964 11:42:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.964 ************************************ 00:18:57.964 START TEST raid_write_error_test 00:18:57.964 ************************************ 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 2 write 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.D5GodJSmw0 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=126291 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 126291 /var/tmp/spdk-raid.sock 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 126291 ']' 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:57.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:57.964 11:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.964 [2024-06-10 11:42:29.766260] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:18:57.964 [2024-06-10 11:42:29.766860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126291 ] 00:18:57.964 [2024-06-10 11:42:29.948341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.223 [2024-06-10 11:42:30.171868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.481 [2024-06-10 11:42:30.418348] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.739 11:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:58.739 11:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:18:58.739 11:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:58.739 11:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:58.998 BaseBdev1_malloc 00:18:58.998 11:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:58.998 true 00:18:58.998 11:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:59.257 [2024-06-10 11:42:31.241130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:59.257 [2024-06-10 11:42:31.241397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.257 
[2024-06-10 11:42:31.241557] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:59.257 [2024-06-10 11:42:31.241661] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.257 [2024-06-10 11:42:31.244338] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.257 [2024-06-10 11:42:31.244521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.257 BaseBdev1 00:18:59.257 11:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:59.257 11:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:59.928 BaseBdev2_malloc 00:18:59.928 11:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:59.928 true 00:18:59.928 11:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:00.189 [2024-06-10 11:42:32.074076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:00.189 [2024-06-10 11:42:32.074391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.189 [2024-06-10 11:42:32.074563] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:00.189 [2024-06-10 11:42:32.074685] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.189 [2024-06-10 11:42:32.077278] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.189 [2024-06-10 11:42:32.077461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:00.189 BaseBdev2 00:19:00.189 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:19:00.485 [2024-06-10 11:42:32.302219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.485 [2024-06-10 11:42:32.304689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.485 [2024-06-10 11:42:32.305111] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:19:00.485 [2024-06-10 11:42:32.305242] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:00.485 [2024-06-10 11:42:32.305436] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:00.485 [2024-06-10 11:42:32.305888] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:19:00.485 [2024-06-10 11:42:32.306005] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:19:00.485 [2024-06-10 11:42:32.306315] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:00.485 
11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.485 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.806 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.806 "name": "raid_bdev1", 00:19:00.806 "uuid": "b546f172-a273-4052-b102-1e0a1559a34f", 00:19:00.806 "strip_size_kb": 0, 00:19:00.806 "state": "online", 00:19:00.806 "raid_level": "raid1", 00:19:00.806 "superblock": true, 00:19:00.806 "num_base_bdevs": 2, 00:19:00.806 "num_base_bdevs_discovered": 2, 00:19:00.806 "num_base_bdevs_operational": 2, 00:19:00.806 "base_bdevs_list": [ 00:19:00.806 { 00:19:00.806 "name": "BaseBdev1", 00:19:00.806 "uuid": "f1864cb1-d6dd-5112-9d97-c571d3ac9092", 00:19:00.806 "is_configured": true, 00:19:00.806 "data_offset": 2048, 00:19:00.806 "data_size": 63488 00:19:00.806 }, 00:19:00.806 { 00:19:00.806 "name": "BaseBdev2", 00:19:00.806 "uuid": "c3e91483-710f-5b38-872f-2fce5a4f2ca6", 00:19:00.806 "is_configured": true, 00:19:00.806 "data_offset": 2048, 00:19:00.806 "data_size": 63488 00:19:00.806 } 00:19:00.806 ] 00:19:00.806 }' 00:19:00.806 11:42:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.806 11:42:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.372 11:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:01.373 11:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:01.373 [2024-06-10 11:42:33.220089] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:02.469 [2024-06-10 11:42:34.396542] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:02.469 [2024-06-10 11:42:34.396893] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:02.469 [2024-06-10 11:42:34.397148] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # 
[[ raid1 = \r\a\i\d\1 ]] 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.469 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.470 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.470 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.470 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.729 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.729 "name": "raid_bdev1", 00:19:02.729 "uuid": "b546f172-a273-4052-b102-1e0a1559a34f", 00:19:02.729 "strip_size_kb": 0, 00:19:02.729 "state": "online", 00:19:02.729 "raid_level": "raid1", 00:19:02.729 "superblock": true, 00:19:02.729 "num_base_bdevs": 2, 00:19:02.729 "num_base_bdevs_discovered": 1, 00:19:02.729 "num_base_bdevs_operational": 1, 00:19:02.729 "base_bdevs_list": [ 00:19:02.729 { 00:19:02.729 "name": null, 00:19:02.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.729 "is_configured": false, 00:19:02.729 "data_offset": 2048, 00:19:02.729 "data_size": 63488 00:19:02.729 }, 00:19:02.729 { 00:19:02.729 "name": "BaseBdev2", 00:19:02.729 "uuid": "c3e91483-710f-5b38-872f-2fce5a4f2ca6", 00:19:02.729 "is_configured": true, 00:19:02.729 "data_offset": 2048, 00:19:02.729 "data_size": 63488 00:19:02.729 } 00:19:02.729 ] 00:19:02.729 }' 00:19:02.729 11:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.729 11:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.298 11:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:03.558 [2024-06-10 11:42:35.570696] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.558 [2024-06-10 11:42:35.570935] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.558 [2024-06-10 11:42:35.573755] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.558 [2024-06-10 11:42:35.573917] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.558 [2024-06-10 11:42:35.574064] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.558 [2024-06-10 11:42:35.574143] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:19:03.558 0 00:19:03.558 11:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 126291 00:19:03.558 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 126291 ']' 00:19:03.558 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 126291 00:19:03.558 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:19:03.558 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:03.558 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 126291 00:19:03.816 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:03.816 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:03.816 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 126291' 00:19:03.816 killing process with pid 126291 00:19:03.816 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 126291 00:19:03.816 [2024-06-10 11:42:35.626932] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.816 11:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 126291 00:19:03.816 [2024-06-10 11:42:35.797676] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.D5GodJSmw0 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:05.719 ************************************ 00:19:05.719 END TEST raid_write_error_test 00:19:05.719 ************************************ 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:05.719 00:19:05.719 real 0m7.791s 00:19:05.719 user 0m11.248s 00:19:05.719 sys 0m0.969s 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:05.719 11:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.719 11:42:37 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:19:05.719 11:42:37 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:05.719 11:42:37 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:19:05.719 11:42:37 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:19:05.719 11:42:37 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:05.719 11:42:37 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:05.719 ************************************ 00:19:05.719 START TEST raid_state_function_test 00:19:05.719 ************************************ 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 3 false 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:05.719 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=126492 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 
126492' 00:19:05.720 Process raid pid: 126492 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 126492 /var/tmp/spdk-raid.sock 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 126492 ']' 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:05.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:05.720 11:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.720 [2024-06-10 11:42:37.605831] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:19:05.720 [2024-06-10 11:42:37.606280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.978 [2024-06-10 11:42:37.793674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.236 [2024-06-10 11:42:38.081172] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.494 [2024-06-10 11:42:38.326886] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.752 11:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:06.752 11:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:19:06.753 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:07.011 [2024-06-10 11:42:38.943112] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:07.011 [2024-06-10 11:42:38.943447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:07.011 [2024-06-10 11:42:38.943570] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.011 [2024-06-10 11:42:38.943650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.011 [2024-06-10 11:42:38.943759] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:07.011 [2024-06-10 11:42:38.943823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:07.011 11:42:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.011 11:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.270 11:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.270 "name": "Existed_Raid", 00:19:07.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.270 "strip_size_kb": 64, 00:19:07.270 "state": "configuring", 00:19:07.270 "raid_level": "raid0", 00:19:07.270 "superblock": false, 00:19:07.270 "num_base_bdevs": 3, 00:19:07.270 "num_base_bdevs_discovered": 0, 00:19:07.270 "num_base_bdevs_operational": 3, 00:19:07.270 "base_bdevs_list": [ 00:19:07.270 { 00:19:07.270 "name": "BaseBdev1", 00:19:07.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.270 "is_configured": false, 00:19:07.270 "data_offset": 0, 00:19:07.270 "data_size": 0 00:19:07.270 }, 00:19:07.270 { 00:19:07.270 "name": "BaseBdev2", 00:19:07.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.270 "is_configured": false, 00:19:07.270 "data_offset": 0, 00:19:07.270 "data_size": 0 00:19:07.270 }, 00:19:07.270 { 00:19:07.270 "name": "BaseBdev3", 00:19:07.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.270 "is_configured": false, 00:19:07.270 "data_offset": 0, 00:19:07.270 "data_size": 0 00:19:07.270 } 00:19:07.270 ] 00:19:07.270 }' 00:19:07.270 11:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.270 11:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.837 11:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:08.095 [2024-06-10 11:42:40.079226] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.095 [2024-06-10 11:42:40.079500] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:08.095 11:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:08.353 [2024-06-10 11:42:40.359295] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.353 [2024-06-10 11:42:40.359588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.353 [2024-06-10 11:42:40.359739] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.353 [2024-06-10 11:42:40.359855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:19:08.353 [2024-06-10 11:42:40.359942] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:08.353 [2024-06-10 11:42:40.360002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:08.353 11:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:08.919 [2024-06-10 11:42:40.700498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.919 BaseBdev1 00:19:08.919 11:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:08.919 11:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:19:08.919 11:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:08.919 11:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:08.919 11:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:08.919 11:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:08.919 11:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.179 11:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:09.437 [ 00:19:09.437 { 00:19:09.437 "name": "BaseBdev1", 00:19:09.437 "aliases": [ 00:19:09.437 "70f17e19-25bb-4cc0-9a44-a77e147aa277" 00:19:09.437 ], 00:19:09.437 "product_name": "Malloc disk", 00:19:09.437 "block_size": 512, 00:19:09.437 "num_blocks": 65536, 00:19:09.437 "uuid": "70f17e19-25bb-4cc0-9a44-a77e147aa277", 00:19:09.437 "assigned_rate_limits": { 00:19:09.437 "rw_ios_per_sec": 0, 00:19:09.437 "rw_mbytes_per_sec": 0, 00:19:09.437 "r_mbytes_per_sec": 0, 00:19:09.437 "w_mbytes_per_sec": 0 00:19:09.437 }, 00:19:09.437 "claimed": true, 00:19:09.437 "claim_type": "exclusive_write", 00:19:09.437 "zoned": false, 00:19:09.437 "supported_io_types": { 00:19:09.437 "read": true, 00:19:09.437 "write": true, 00:19:09.437 "unmap": true, 00:19:09.437 "write_zeroes": true, 00:19:09.437 "flush": true, 00:19:09.437 "reset": true, 00:19:09.437 "compare": false, 00:19:09.437 "compare_and_write": false, 00:19:09.437 "abort": true, 00:19:09.437 "nvme_admin": false, 00:19:09.437 "nvme_io": false 00:19:09.437 }, 00:19:09.437 "memory_domains": [ 00:19:09.437 { 00:19:09.437 "dma_device_id": "system", 00:19:09.437 "dma_device_type": 1 00:19:09.437 }, 00:19:09.437 { 00:19:09.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.437 "dma_device_type": 2 00:19:09.437 } 00:19:09.437 ], 00:19:09.437 "driver_specific": {} 00:19:09.437 } 00:19:09.437 ] 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.437 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.696 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.696 "name": "Existed_Raid", 00:19:09.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.696 "strip_size_kb": 64, 00:19:09.696 "state": "configuring", 00:19:09.696 "raid_level": "raid0", 00:19:09.696 "superblock": false, 00:19:09.696 "num_base_bdevs": 3, 00:19:09.696 "num_base_bdevs_discovered": 1, 00:19:09.696 "num_base_bdevs_operational": 3, 00:19:09.696 "base_bdevs_list": [ 00:19:09.696 { 00:19:09.696 "name": "BaseBdev1", 00:19:09.696 "uuid": "70f17e19-25bb-4cc0-9a44-a77e147aa277", 00:19:09.696 "is_configured": true, 00:19:09.696 "data_offset": 0, 00:19:09.696 "data_size": 65536 00:19:09.696 }, 00:19:09.696 { 00:19:09.696 "name": "BaseBdev2", 00:19:09.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.696 "is_configured": false, 00:19:09.696 "data_offset": 0, 00:19:09.696 "data_size": 0 00:19:09.696 }, 00:19:09.696 { 00:19:09.696 "name": "BaseBdev3", 00:19:09.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.696 "is_configured": false, 00:19:09.696 "data_offset": 0, 00:19:09.696 "data_size": 0 00:19:09.696 } 00:19:09.696 ] 00:19:09.696 }' 00:19:09.696 11:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.696 11:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.263 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:10.521 [2024-06-10 11:42:42.524897] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:10.521 [2024-06-10 11:42:42.525172] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:10.521 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:10.779 [2024-06-10 11:42:42.812978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.779 [2024-06-10 11:42:42.815332] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:10.779 [2024-06-10 11:42:42.815538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:19:10.779 [2024-06-10 11:42:42.815649] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.779 [2024-06-10 11:42:42.815731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:10.779 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.780 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.780 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.780 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.038 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.038 11:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.296 11:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.296 "name": "Existed_Raid", 00:19:11.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.296 "strip_size_kb": 64, 00:19:11.296 "state": "configuring", 00:19:11.296 "raid_level": "raid0", 00:19:11.296 "superblock": false, 00:19:11.296 "num_base_bdevs": 3, 00:19:11.296 "num_base_bdevs_discovered": 1, 00:19:11.296 "num_base_bdevs_operational": 3, 00:19:11.296 "base_bdevs_list": [ 00:19:11.296 { 00:19:11.296 "name": "BaseBdev1", 00:19:11.296 "uuid": "70f17e19-25bb-4cc0-9a44-a77e147aa277", 00:19:11.296 "is_configured": true, 00:19:11.296 "data_offset": 0, 00:19:11.296 "data_size": 65536 00:19:11.296 }, 00:19:11.296 { 00:19:11.296 "name": "BaseBdev2", 00:19:11.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.296 "is_configured": false, 00:19:11.296 "data_offset": 0, 00:19:11.296 "data_size": 0 00:19:11.296 }, 00:19:11.296 { 00:19:11.296 "name": "BaseBdev3", 00:19:11.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.296 "is_configured": false, 00:19:11.296 "data_offset": 0, 00:19:11.296 "data_size": 0 00:19:11.296 } 00:19:11.296 ] 00:19:11.296 }' 00:19:11.296 11:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.296 11:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.863 11:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2 00:19:12.122 [2024-06-10 11:42:44.087694] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.122 BaseBdev2 00:19:12.122 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:12.122 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:19:12.122 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:12.122 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:12.122 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:12.122 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:12.122 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.381 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:12.949 [ 00:19:12.949 { 00:19:12.949 "name": "BaseBdev2", 00:19:12.949 "aliases": [ 00:19:12.949 "5207fd52-f4ba-40ed-bfce-f346de8e55d1" 00:19:12.949 ], 00:19:12.949 "product_name": "Malloc disk", 00:19:12.949 "block_size": 512, 00:19:12.949 "num_blocks": 65536, 00:19:12.949 "uuid": "5207fd52-f4ba-40ed-bfce-f346de8e55d1", 00:19:12.949 "assigned_rate_limits": { 00:19:12.949 "rw_ios_per_sec": 0, 00:19:12.949 "rw_mbytes_per_sec": 0, 00:19:12.949 "r_mbytes_per_sec": 0, 00:19:12.949 "w_mbytes_per_sec": 0 00:19:12.949 }, 00:19:12.949 "claimed": true, 00:19:12.949 "claim_type": "exclusive_write", 00:19:12.949 "zoned": false, 00:19:12.949 "supported_io_types": { 00:19:12.949 "read": true, 00:19:12.949 "write": true, 00:19:12.949 "unmap": true, 00:19:12.949 "write_zeroes": true, 00:19:12.949 "flush": true, 00:19:12.949 "reset": true, 00:19:12.949 "compare": false, 00:19:12.949 "compare_and_write": false, 00:19:12.949 "abort": true, 00:19:12.949 "nvme_admin": false, 00:19:12.949 "nvme_io": false 00:19:12.949 }, 00:19:12.949 "memory_domains": [ 00:19:12.949 { 00:19:12.949 "dma_device_id": "system", 00:19:12.949 "dma_device_type": 1 00:19:12.949 }, 00:19:12.949 { 00:19:12.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.949 "dma_device_type": 2 00:19:12.949 } 00:19:12.949 ], 00:19:12.949 "driver_specific": {} 00:19:12.949 } 00:19:12.949 ] 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.949 11:42:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.949 "name": "Existed_Raid", 00:19:12.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.949 "strip_size_kb": 64, 00:19:12.949 "state": "configuring", 00:19:12.949 "raid_level": "raid0", 00:19:12.949 "superblock": false, 00:19:12.949 "num_base_bdevs": 3, 00:19:12.949 "num_base_bdevs_discovered": 2, 00:19:12.949 "num_base_bdevs_operational": 3, 00:19:12.949 "base_bdevs_list": [ 00:19:12.949 { 00:19:12.949 "name": "BaseBdev1", 00:19:12.949 "uuid": "70f17e19-25bb-4cc0-9a44-a77e147aa277", 00:19:12.949 "is_configured": true, 00:19:12.949 "data_offset": 0, 00:19:12.949 "data_size": 65536 00:19:12.949 }, 00:19:12.949 { 00:19:12.949 "name": "BaseBdev2", 00:19:12.949 "uuid": "5207fd52-f4ba-40ed-bfce-f346de8e55d1", 00:19:12.949 "is_configured": true, 00:19:12.949 "data_offset": 0, 00:19:12.949 "data_size": 65536 00:19:12.949 }, 00:19:12.949 { 00:19:12.949 "name": "BaseBdev3", 00:19:12.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.949 "is_configured": false, 00:19:12.949 "data_offset": 0, 00:19:12.949 "data_size": 0 00:19:12.949 } 00:19:12.949 ] 00:19:12.949 }' 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.949 11:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.518 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:13.777 [2024-06-10 11:42:45.808478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.777 [2024-06-10 11:42:45.808792] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:13.777 [2024-06-10 11:42:45.808845] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:13.777 [2024-06-10 11:42:45.809124] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:13.777 [2024-06-10 11:42:45.809551] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:13.777 [2024-06-10 11:42:45.809699] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:13.777 [2024-06-10 11:42:45.810092] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.777 BaseBdev3 00:19:13.777 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:13.777 11:42:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:19:13.777 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:13.777 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:13.777 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:13.777 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:13.777 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.075 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:14.333 [ 00:19:14.333 { 00:19:14.333 "name": "BaseBdev3", 00:19:14.333 "aliases": [ 00:19:14.333 "da09cbb8-effc-4607-a77d-0bfc4c49efb4" 00:19:14.333 ], 00:19:14.333 "product_name": "Malloc disk", 00:19:14.333 "block_size": 512, 00:19:14.333 "num_blocks": 65536, 00:19:14.333 "uuid": "da09cbb8-effc-4607-a77d-0bfc4c49efb4", 00:19:14.333 "assigned_rate_limits": { 00:19:14.333 "rw_ios_per_sec": 0, 00:19:14.333 "rw_mbytes_per_sec": 0, 00:19:14.333 "r_mbytes_per_sec": 0, 00:19:14.333 "w_mbytes_per_sec": 0 00:19:14.333 }, 00:19:14.333 "claimed": true, 00:19:14.333 "claim_type": "exclusive_write", 00:19:14.333 "zoned": false, 00:19:14.333 "supported_io_types": { 00:19:14.333 "read": true, 00:19:14.333 "write": true, 00:19:14.333 "unmap": true, 00:19:14.333 "write_zeroes": true, 00:19:14.333 "flush": true, 00:19:14.333 "reset": true, 00:19:14.333 "compare": false, 00:19:14.333 "compare_and_write": false, 00:19:14.333 "abort": true, 00:19:14.333 "nvme_admin": false, 00:19:14.333 "nvme_io": false 00:19:14.333 }, 00:19:14.333 "memory_domains": [ 00:19:14.333 { 00:19:14.333 "dma_device_id": "system", 00:19:14.333 "dma_device_type": 1 00:19:14.333 }, 00:19:14.333 { 00:19:14.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.333 "dma_device_type": 2 00:19:14.333 } 00:19:14.333 ], 00:19:14.333 "driver_specific": {} 00:19:14.333 } 00:19:14.333 ] 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.333 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.591 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.591 "name": "Existed_Raid", 00:19:14.591 "uuid": "1aea716a-25d7-42e6-bc81-8cf745e78bac", 00:19:14.591 "strip_size_kb": 64, 00:19:14.591 "state": "online", 00:19:14.591 "raid_level": "raid0", 00:19:14.591 "superblock": false, 00:19:14.591 "num_base_bdevs": 3, 00:19:14.591 "num_base_bdevs_discovered": 3, 00:19:14.591 "num_base_bdevs_operational": 3, 00:19:14.591 "base_bdevs_list": [ 00:19:14.591 { 00:19:14.591 "name": "BaseBdev1", 00:19:14.591 "uuid": "70f17e19-25bb-4cc0-9a44-a77e147aa277", 00:19:14.591 "is_configured": true, 00:19:14.591 "data_offset": 0, 00:19:14.591 "data_size": 65536 00:19:14.591 }, 00:19:14.591 { 00:19:14.591 "name": "BaseBdev2", 00:19:14.591 "uuid": "5207fd52-f4ba-40ed-bfce-f346de8e55d1", 00:19:14.591 "is_configured": true, 00:19:14.591 "data_offset": 0, 00:19:14.591 "data_size": 65536 00:19:14.591 }, 00:19:14.591 { 00:19:14.591 "name": "BaseBdev3", 00:19:14.591 "uuid": "da09cbb8-effc-4607-a77d-0bfc4c49efb4", 00:19:14.591 "is_configured": true, 00:19:14.591 "data_offset": 0, 00:19:14.591 "data_size": 65536 00:19:14.591 } 00:19:14.591 ] 00:19:14.591 }' 00:19:14.591 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.591 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:15.524 [2024-06-10 11:42:47.529211] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.524 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:15.524 "name": "Existed_Raid", 00:19:15.524 "aliases": [ 00:19:15.524 "1aea716a-25d7-42e6-bc81-8cf745e78bac" 00:19:15.524 ], 00:19:15.524 "product_name": "Raid Volume", 00:19:15.524 "block_size": 512, 00:19:15.524 "num_blocks": 196608, 00:19:15.524 "uuid": "1aea716a-25d7-42e6-bc81-8cf745e78bac", 00:19:15.524 "assigned_rate_limits": { 00:19:15.524 "rw_ios_per_sec": 0, 00:19:15.524 "rw_mbytes_per_sec": 0, 00:19:15.524 "r_mbytes_per_sec": 0, 00:19:15.524 "w_mbytes_per_sec": 0 
00:19:15.524 }, 00:19:15.524 "claimed": false, 00:19:15.524 "zoned": false, 00:19:15.524 "supported_io_types": { 00:19:15.524 "read": true, 00:19:15.524 "write": true, 00:19:15.524 "unmap": true, 00:19:15.524 "write_zeroes": true, 00:19:15.524 "flush": true, 00:19:15.524 "reset": true, 00:19:15.524 "compare": false, 00:19:15.524 "compare_and_write": false, 00:19:15.524 "abort": false, 00:19:15.524 "nvme_admin": false, 00:19:15.524 "nvme_io": false 00:19:15.524 }, 00:19:15.524 "memory_domains": [ 00:19:15.524 { 00:19:15.525 "dma_device_id": "system", 00:19:15.525 "dma_device_type": 1 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.525 "dma_device_type": 2 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "dma_device_id": "system", 00:19:15.525 "dma_device_type": 1 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.525 "dma_device_type": 2 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "dma_device_id": "system", 00:19:15.525 "dma_device_type": 1 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.525 "dma_device_type": 2 00:19:15.525 } 00:19:15.525 ], 00:19:15.525 "driver_specific": { 00:19:15.525 "raid": { 00:19:15.525 "uuid": "1aea716a-25d7-42e6-bc81-8cf745e78bac", 00:19:15.525 "strip_size_kb": 64, 00:19:15.525 "state": "online", 00:19:15.525 "raid_level": "raid0", 00:19:15.525 "superblock": false, 00:19:15.525 "num_base_bdevs": 3, 00:19:15.525 "num_base_bdevs_discovered": 3, 00:19:15.525 "num_base_bdevs_operational": 3, 00:19:15.525 "base_bdevs_list": [ 00:19:15.525 { 00:19:15.525 "name": "BaseBdev1", 00:19:15.525 "uuid": "70f17e19-25bb-4cc0-9a44-a77e147aa277", 00:19:15.525 "is_configured": true, 00:19:15.525 "data_offset": 0, 00:19:15.525 "data_size": 65536 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "name": "BaseBdev2", 00:19:15.525 "uuid": "5207fd52-f4ba-40ed-bfce-f346de8e55d1", 00:19:15.525 "is_configured": true, 00:19:15.525 "data_offset": 0, 00:19:15.525 "data_size": 65536 00:19:15.525 }, 00:19:15.525 { 00:19:15.525 "name": "BaseBdev3", 00:19:15.525 "uuid": "da09cbb8-effc-4607-a77d-0bfc4c49efb4", 00:19:15.525 "is_configured": true, 00:19:15.525 "data_offset": 0, 00:19:15.525 "data_size": 65536 00:19:15.525 } 00:19:15.525 ] 00:19:15.525 } 00:19:15.525 } 00:19:15.525 }' 00:19:15.525 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:15.783 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:15.783 BaseBdev2 00:19:15.783 BaseBdev3' 00:19:15.783 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:15.783 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:15.783 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:16.041 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:16.041 "name": "BaseBdev1", 00:19:16.041 "aliases": [ 00:19:16.041 "70f17e19-25bb-4cc0-9a44-a77e147aa277" 00:19:16.041 ], 00:19:16.041 "product_name": "Malloc disk", 00:19:16.041 "block_size": 512, 00:19:16.041 "num_blocks": 65536, 00:19:16.041 "uuid": "70f17e19-25bb-4cc0-9a44-a77e147aa277", 00:19:16.041 "assigned_rate_limits": { 00:19:16.041 "rw_ios_per_sec": 0, 
00:19:16.041 "rw_mbytes_per_sec": 0, 00:19:16.041 "r_mbytes_per_sec": 0, 00:19:16.041 "w_mbytes_per_sec": 0 00:19:16.041 }, 00:19:16.041 "claimed": true, 00:19:16.041 "claim_type": "exclusive_write", 00:19:16.041 "zoned": false, 00:19:16.041 "supported_io_types": { 00:19:16.041 "read": true, 00:19:16.041 "write": true, 00:19:16.041 "unmap": true, 00:19:16.041 "write_zeroes": true, 00:19:16.041 "flush": true, 00:19:16.041 "reset": true, 00:19:16.041 "compare": false, 00:19:16.041 "compare_and_write": false, 00:19:16.041 "abort": true, 00:19:16.041 "nvme_admin": false, 00:19:16.041 "nvme_io": false 00:19:16.041 }, 00:19:16.041 "memory_domains": [ 00:19:16.041 { 00:19:16.041 "dma_device_id": "system", 00:19:16.041 "dma_device_type": 1 00:19:16.041 }, 00:19:16.041 { 00:19:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.041 "dma_device_type": 2 00:19:16.041 } 00:19:16.041 ], 00:19:16.041 "driver_specific": {} 00:19:16.041 }' 00:19:16.041 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.041 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.041 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:16.041 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:16.041 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:16.299 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:16.865 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:16.865 "name": "BaseBdev2", 00:19:16.866 "aliases": [ 00:19:16.866 "5207fd52-f4ba-40ed-bfce-f346de8e55d1" 00:19:16.866 ], 00:19:16.866 "product_name": "Malloc disk", 00:19:16.866 "block_size": 512, 00:19:16.866 "num_blocks": 65536, 00:19:16.866 "uuid": "5207fd52-f4ba-40ed-bfce-f346de8e55d1", 00:19:16.866 "assigned_rate_limits": { 00:19:16.866 "rw_ios_per_sec": 0, 00:19:16.866 "rw_mbytes_per_sec": 0, 00:19:16.866 "r_mbytes_per_sec": 0, 00:19:16.866 "w_mbytes_per_sec": 0 00:19:16.866 }, 00:19:16.866 "claimed": true, 00:19:16.866 "claim_type": "exclusive_write", 00:19:16.866 "zoned": false, 00:19:16.866 "supported_io_types": { 00:19:16.866 "read": true, 00:19:16.866 "write": true, 00:19:16.866 "unmap": true, 00:19:16.866 "write_zeroes": true, 00:19:16.866 "flush": true, 00:19:16.866 "reset": true, 00:19:16.866 "compare": false, 00:19:16.866 
"compare_and_write": false, 00:19:16.866 "abort": true, 00:19:16.866 "nvme_admin": false, 00:19:16.866 "nvme_io": false 00:19:16.866 }, 00:19:16.866 "memory_domains": [ 00:19:16.866 { 00:19:16.866 "dma_device_id": "system", 00:19:16.866 "dma_device_type": 1 00:19:16.866 }, 00:19:16.866 { 00:19:16.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.866 "dma_device_type": 2 00:19:16.866 } 00:19:16.866 ], 00:19:16.866 "driver_specific": {} 00:19:16.866 }' 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:16.866 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.134 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.134 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:17.134 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:17.134 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:17.134 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:17.394 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:17.394 "name": "BaseBdev3", 00:19:17.394 "aliases": [ 00:19:17.394 "da09cbb8-effc-4607-a77d-0bfc4c49efb4" 00:19:17.394 ], 00:19:17.394 "product_name": "Malloc disk", 00:19:17.394 "block_size": 512, 00:19:17.394 "num_blocks": 65536, 00:19:17.394 "uuid": "da09cbb8-effc-4607-a77d-0bfc4c49efb4", 00:19:17.394 "assigned_rate_limits": { 00:19:17.394 "rw_ios_per_sec": 0, 00:19:17.394 "rw_mbytes_per_sec": 0, 00:19:17.394 "r_mbytes_per_sec": 0, 00:19:17.394 "w_mbytes_per_sec": 0 00:19:17.394 }, 00:19:17.394 "claimed": true, 00:19:17.394 "claim_type": "exclusive_write", 00:19:17.394 "zoned": false, 00:19:17.394 "supported_io_types": { 00:19:17.394 "read": true, 00:19:17.394 "write": true, 00:19:17.394 "unmap": true, 00:19:17.394 "write_zeroes": true, 00:19:17.394 "flush": true, 00:19:17.394 "reset": true, 00:19:17.394 "compare": false, 00:19:17.394 "compare_and_write": false, 00:19:17.394 "abort": true, 00:19:17.394 "nvme_admin": false, 00:19:17.394 "nvme_io": false 00:19:17.394 }, 00:19:17.394 "memory_domains": [ 00:19:17.394 { 00:19:17.394 "dma_device_id": "system", 00:19:17.394 "dma_device_type": 1 00:19:17.394 }, 00:19:17.394 { 00:19:17.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.394 "dma_device_type": 2 00:19:17.394 } 00:19:17.394 ], 00:19:17.394 "driver_specific": {} 00:19:17.394 }' 00:19:17.394 11:42:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.394 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.394 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:17.394 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.652 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.652 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:17.652 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.652 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.652 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:17.652 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.652 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.910 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:17.910 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:18.169 [2024-06-10 11:42:50.013549] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:18.169 [2024-06-10 11:42:50.013840] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.169 [2024-06-10 11:42:50.014039] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:18.169 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.170 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.170 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.170 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.170 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.170 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.519 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.519 "name": "Existed_Raid", 00:19:18.519 "uuid": "1aea716a-25d7-42e6-bc81-8cf745e78bac", 00:19:18.519 "strip_size_kb": 64, 00:19:18.519 "state": "offline", 00:19:18.519 "raid_level": "raid0", 00:19:18.519 "superblock": false, 00:19:18.519 "num_base_bdevs": 3, 00:19:18.519 "num_base_bdevs_discovered": 2, 00:19:18.519 "num_base_bdevs_operational": 2, 00:19:18.519 "base_bdevs_list": [ 00:19:18.519 { 00:19:18.519 "name": null, 00:19:18.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.519 "is_configured": false, 00:19:18.519 "data_offset": 0, 00:19:18.519 "data_size": 65536 00:19:18.519 }, 00:19:18.519 { 00:19:18.519 "name": "BaseBdev2", 00:19:18.519 "uuid": "5207fd52-f4ba-40ed-bfce-f346de8e55d1", 00:19:18.519 "is_configured": true, 00:19:18.519 "data_offset": 0, 00:19:18.519 "data_size": 65536 00:19:18.519 }, 00:19:18.519 { 00:19:18.519 "name": "BaseBdev3", 00:19:18.519 "uuid": "da09cbb8-effc-4607-a77d-0bfc4c49efb4", 00:19:18.519 "is_configured": true, 00:19:18.519 "data_offset": 0, 00:19:18.519 "data_size": 65536 00:19:18.519 } 00:19:18.519 ] 00:19:18.519 }' 00:19:18.519 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.519 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.453 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:19.453 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:19.453 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.453 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:19.453 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:19.453 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:19.453 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:19.711 [2024-06-10 11:42:51.730465] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:19.969 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:19.969 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:19.969 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.969 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:20.236 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:20.236 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:20.236 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:20.499 
[2024-06-10 11:42:52.409576] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:20.499 [2024-06-10 11:42:52.409899] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:20.499 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:20.499 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:20.499 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:20.499 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.063 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:21.063 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:21.063 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:21.063 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:21.063 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:21.063 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:21.322 BaseBdev2 00:19:21.322 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:21.322 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:19:21.322 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:21.322 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:21.322 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:21.322 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:21.322 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:21.580 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:21.838 [ 00:19:21.838 { 00:19:21.838 "name": "BaseBdev2", 00:19:21.838 "aliases": [ 00:19:21.838 "d4fe8e0a-063d-49fe-a61a-b8616797e3c1" 00:19:21.838 ], 00:19:21.838 "product_name": "Malloc disk", 00:19:21.838 "block_size": 512, 00:19:21.838 "num_blocks": 65536, 00:19:21.838 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:21.838 "assigned_rate_limits": { 00:19:21.838 "rw_ios_per_sec": 0, 00:19:21.838 "rw_mbytes_per_sec": 0, 00:19:21.838 "r_mbytes_per_sec": 0, 00:19:21.838 "w_mbytes_per_sec": 0 00:19:21.838 }, 00:19:21.838 "claimed": false, 00:19:21.838 "zoned": false, 00:19:21.838 "supported_io_types": { 00:19:21.838 "read": true, 00:19:21.838 "write": true, 00:19:21.838 "unmap": true, 00:19:21.838 "write_zeroes": true, 00:19:21.838 "flush": true, 00:19:21.838 "reset": true, 00:19:21.838 "compare": false, 00:19:21.838 "compare_and_write": false, 00:19:21.838 "abort": true, 00:19:21.838 "nvme_admin": false, 00:19:21.838 "nvme_io": false 00:19:21.838 }, 00:19:21.838 
"memory_domains": [ 00:19:21.838 { 00:19:21.838 "dma_device_id": "system", 00:19:21.839 "dma_device_type": 1 00:19:21.839 }, 00:19:21.839 { 00:19:21.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.839 "dma_device_type": 2 00:19:21.839 } 00:19:21.839 ], 00:19:21.839 "driver_specific": {} 00:19:21.839 } 00:19:21.839 ] 00:19:21.839 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:21.839 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:21.839 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:21.839 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:22.095 BaseBdev3 00:19:22.095 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:22.095 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:19:22.095 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:22.095 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:22.095 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:22.095 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:22.096 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:22.353 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:22.611 [ 00:19:22.611 { 00:19:22.611 "name": "BaseBdev3", 00:19:22.611 "aliases": [ 00:19:22.611 "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d" 00:19:22.611 ], 00:19:22.611 "product_name": "Malloc disk", 00:19:22.611 "block_size": 512, 00:19:22.611 "num_blocks": 65536, 00:19:22.611 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:22.611 "assigned_rate_limits": { 00:19:22.611 "rw_ios_per_sec": 0, 00:19:22.611 "rw_mbytes_per_sec": 0, 00:19:22.611 "r_mbytes_per_sec": 0, 00:19:22.611 "w_mbytes_per_sec": 0 00:19:22.611 }, 00:19:22.611 "claimed": false, 00:19:22.611 "zoned": false, 00:19:22.611 "supported_io_types": { 00:19:22.611 "read": true, 00:19:22.611 "write": true, 00:19:22.611 "unmap": true, 00:19:22.611 "write_zeroes": true, 00:19:22.611 "flush": true, 00:19:22.611 "reset": true, 00:19:22.611 "compare": false, 00:19:22.611 "compare_and_write": false, 00:19:22.611 "abort": true, 00:19:22.611 "nvme_admin": false, 00:19:22.611 "nvme_io": false 00:19:22.611 }, 00:19:22.611 "memory_domains": [ 00:19:22.611 { 00:19:22.611 "dma_device_id": "system", 00:19:22.611 "dma_device_type": 1 00:19:22.611 }, 00:19:22.611 { 00:19:22.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.611 "dma_device_type": 2 00:19:22.611 } 00:19:22.611 ], 00:19:22.611 "driver_specific": {} 00:19:22.611 } 00:19:22.611 ] 00:19:22.611 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:22.611 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:22.611 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:19:22.611 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:22.868 [2024-06-10 11:42:54.834103] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:22.868 [2024-06-10 11:42:54.834468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:22.868 [2024-06-10 11:42:54.834696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.868 [2024-06-10 11:42:54.837393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.868 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.126 11:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.126 "name": "Existed_Raid", 00:19:23.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.126 "strip_size_kb": 64, 00:19:23.126 "state": "configuring", 00:19:23.126 "raid_level": "raid0", 00:19:23.126 "superblock": false, 00:19:23.126 "num_base_bdevs": 3, 00:19:23.126 "num_base_bdevs_discovered": 2, 00:19:23.126 "num_base_bdevs_operational": 3, 00:19:23.126 "base_bdevs_list": [ 00:19:23.126 { 00:19:23.126 "name": "BaseBdev1", 00:19:23.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.126 "is_configured": false, 00:19:23.126 "data_offset": 0, 00:19:23.126 "data_size": 0 00:19:23.126 }, 00:19:23.126 { 00:19:23.126 "name": "BaseBdev2", 00:19:23.126 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:23.126 "is_configured": true, 00:19:23.126 "data_offset": 0, 00:19:23.126 "data_size": 65536 00:19:23.126 }, 00:19:23.126 { 00:19:23.126 "name": "BaseBdev3", 00:19:23.126 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:23.126 "is_configured": true, 00:19:23.126 "data_offset": 0, 00:19:23.126 "data_size": 65536 00:19:23.126 } 00:19:23.126 ] 00:19:23.126 }' 00:19:23.126 11:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.126 11:42:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.069 11:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:24.069 [2024-06-10 11:42:56.034321] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.069 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.326 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.326 "name": "Existed_Raid", 00:19:24.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.326 "strip_size_kb": 64, 00:19:24.326 "state": "configuring", 00:19:24.326 "raid_level": "raid0", 00:19:24.326 "superblock": false, 00:19:24.326 "num_base_bdevs": 3, 00:19:24.326 "num_base_bdevs_discovered": 1, 00:19:24.326 "num_base_bdevs_operational": 3, 00:19:24.326 "base_bdevs_list": [ 00:19:24.326 { 00:19:24.326 "name": "BaseBdev1", 00:19:24.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.326 "is_configured": false, 00:19:24.326 "data_offset": 0, 00:19:24.326 "data_size": 0 00:19:24.326 }, 00:19:24.326 { 00:19:24.326 "name": null, 00:19:24.326 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:24.326 "is_configured": false, 00:19:24.326 "data_offset": 0, 00:19:24.326 "data_size": 65536 00:19:24.326 }, 00:19:24.326 { 00:19:24.326 "name": "BaseBdev3", 00:19:24.326 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:24.326 "is_configured": true, 00:19:24.326 "data_offset": 0, 00:19:24.326 "data_size": 65536 00:19:24.326 } 00:19:24.326 ] 00:19:24.326 }' 00:19:24.326 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.326 11:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.890 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.890 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:25.456 [2024-06-10 11:42:57.463445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.456 BaseBdev1 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:25.456 11:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.714 11:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:26.014 [ 00:19:26.014 { 00:19:26.014 "name": "BaseBdev1", 00:19:26.014 "aliases": [ 00:19:26.014 "4835d3f6-1e30-44ab-ad23-fb03d4b80e70" 00:19:26.014 ], 00:19:26.014 "product_name": "Malloc disk", 00:19:26.014 "block_size": 512, 00:19:26.014 "num_blocks": 65536, 00:19:26.014 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:26.014 "assigned_rate_limits": { 00:19:26.014 "rw_ios_per_sec": 0, 00:19:26.014 "rw_mbytes_per_sec": 0, 00:19:26.014 "r_mbytes_per_sec": 0, 00:19:26.014 "w_mbytes_per_sec": 0 00:19:26.014 }, 00:19:26.014 "claimed": true, 00:19:26.014 "claim_type": "exclusive_write", 00:19:26.014 "zoned": false, 00:19:26.014 "supported_io_types": { 00:19:26.014 "read": true, 00:19:26.014 "write": true, 00:19:26.014 "unmap": true, 00:19:26.014 "write_zeroes": true, 00:19:26.014 "flush": true, 00:19:26.014 "reset": true, 00:19:26.014 "compare": false, 00:19:26.014 "compare_and_write": false, 00:19:26.014 "abort": true, 00:19:26.014 "nvme_admin": false, 00:19:26.014 "nvme_io": false 00:19:26.014 }, 00:19:26.014 "memory_domains": [ 00:19:26.014 { 00:19:26.014 "dma_device_id": "system", 00:19:26.014 "dma_device_type": 1 00:19:26.014 }, 00:19:26.014 { 00:19:26.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.014 "dma_device_type": 2 00:19:26.014 } 00:19:26.014 ], 00:19:26.014 "driver_specific": {} 00:19:26.014 } 00:19:26.014 ] 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:26.014 11:42:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.014 11:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.273 11:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.273 "name": "Existed_Raid", 00:19:26.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.273 "strip_size_kb": 64, 00:19:26.273 "state": "configuring", 00:19:26.273 "raid_level": "raid0", 00:19:26.273 "superblock": false, 00:19:26.273 "num_base_bdevs": 3, 00:19:26.273 "num_base_bdevs_discovered": 2, 00:19:26.273 "num_base_bdevs_operational": 3, 00:19:26.273 "base_bdevs_list": [ 00:19:26.273 { 00:19:26.273 "name": "BaseBdev1", 00:19:26.273 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:26.273 "is_configured": true, 00:19:26.273 "data_offset": 0, 00:19:26.273 "data_size": 65536 00:19:26.273 }, 00:19:26.273 { 00:19:26.273 "name": null, 00:19:26.273 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:26.273 "is_configured": false, 00:19:26.273 "data_offset": 0, 00:19:26.273 "data_size": 65536 00:19:26.273 }, 00:19:26.273 { 00:19:26.273 "name": "BaseBdev3", 00:19:26.273 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:26.273 "is_configured": true, 00:19:26.273 "data_offset": 0, 00:19:26.273 "data_size": 65536 00:19:26.273 } 00:19:26.273 ] 00:19:26.273 }' 00:19:26.273 11:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.273 11:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.840 11:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.840 11:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:27.098 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:27.098 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:27.355 [2024-06-10 11:42:59.252298] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.355 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.612 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.612 "name": "Existed_Raid", 00:19:27.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.612 "strip_size_kb": 64, 00:19:27.612 "state": "configuring", 00:19:27.612 "raid_level": "raid0", 00:19:27.612 "superblock": false, 00:19:27.612 "num_base_bdevs": 3, 00:19:27.612 "num_base_bdevs_discovered": 1, 00:19:27.612 "num_base_bdevs_operational": 3, 00:19:27.612 "base_bdevs_list": [ 00:19:27.612 { 00:19:27.612 "name": "BaseBdev1", 00:19:27.612 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:27.612 "is_configured": true, 00:19:27.612 "data_offset": 0, 00:19:27.612 "data_size": 65536 00:19:27.612 }, 00:19:27.612 { 00:19:27.612 "name": null, 00:19:27.612 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:27.612 "is_configured": false, 00:19:27.612 "data_offset": 0, 00:19:27.612 "data_size": 65536 00:19:27.612 }, 00:19:27.612 { 00:19:27.612 "name": null, 00:19:27.612 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:27.613 "is_configured": false, 00:19:27.613 "data_offset": 0, 00:19:27.613 "data_size": 65536 00:19:27.613 } 00:19:27.613 ] 00:19:27.613 }' 00:19:27.613 11:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.613 11:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.177 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.177 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:28.434 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:28.434 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:28.692 [2024-06-10 11:43:00.581177] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.692 
11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.692 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.949 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.949 "name": "Existed_Raid", 00:19:28.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.949 "strip_size_kb": 64, 00:19:28.949 "state": "configuring", 00:19:28.949 "raid_level": "raid0", 00:19:28.949 "superblock": false, 00:19:28.949 "num_base_bdevs": 3, 00:19:28.949 "num_base_bdevs_discovered": 2, 00:19:28.949 "num_base_bdevs_operational": 3, 00:19:28.949 "base_bdevs_list": [ 00:19:28.949 { 00:19:28.949 "name": "BaseBdev1", 00:19:28.949 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:28.949 "is_configured": true, 00:19:28.949 "data_offset": 0, 00:19:28.949 "data_size": 65536 00:19:28.949 }, 00:19:28.949 { 00:19:28.949 "name": null, 00:19:28.950 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:28.950 "is_configured": false, 00:19:28.950 "data_offset": 0, 00:19:28.950 "data_size": 65536 00:19:28.950 }, 00:19:28.950 { 00:19:28.950 "name": "BaseBdev3", 00:19:28.950 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:28.950 "is_configured": true, 00:19:28.950 "data_offset": 0, 00:19:28.950 "data_size": 65536 00:19:28.950 } 00:19:28.950 ] 00:19:28.950 }' 00:19:28.950 11:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.950 11:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.516 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:29.516 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.775 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:29.775 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:30.033 [2024-06-10 11:43:01.837491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.033 11:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.290 11:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.290 "name": "Existed_Raid", 00:19:30.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.290 "strip_size_kb": 64, 00:19:30.290 "state": "configuring", 00:19:30.290 "raid_level": "raid0", 00:19:30.290 "superblock": false, 00:19:30.290 "num_base_bdevs": 3, 00:19:30.290 "num_base_bdevs_discovered": 1, 00:19:30.290 "num_base_bdevs_operational": 3, 00:19:30.290 "base_bdevs_list": [ 00:19:30.290 { 00:19:30.290 "name": null, 00:19:30.290 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:30.290 "is_configured": false, 00:19:30.290 "data_offset": 0, 00:19:30.290 "data_size": 65536 00:19:30.290 }, 00:19:30.290 { 00:19:30.290 "name": null, 00:19:30.290 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:30.290 "is_configured": false, 00:19:30.290 "data_offset": 0, 00:19:30.290 "data_size": 65536 00:19:30.290 }, 00:19:30.290 { 00:19:30.290 "name": "BaseBdev3", 00:19:30.290 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:30.290 "is_configured": true, 00:19:30.290 "data_offset": 0, 00:19:30.290 "data_size": 65536 00:19:30.290 } 00:19:30.290 ] 00:19:30.290 }' 00:19:30.290 11:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.290 11:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.856 11:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:30.856 11:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.114 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:31.114 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:31.372 [2024-06-10 11:43:03.399190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:31.372 
11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.372 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.936 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.936 "name": "Existed_Raid", 00:19:31.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.936 "strip_size_kb": 64, 00:19:31.936 "state": "configuring", 00:19:31.936 "raid_level": "raid0", 00:19:31.936 "superblock": false, 00:19:31.936 "num_base_bdevs": 3, 00:19:31.936 "num_base_bdevs_discovered": 2, 00:19:31.936 "num_base_bdevs_operational": 3, 00:19:31.936 "base_bdevs_list": [ 00:19:31.936 { 00:19:31.936 "name": null, 00:19:31.936 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:31.936 "is_configured": false, 00:19:31.936 "data_offset": 0, 00:19:31.936 "data_size": 65536 00:19:31.936 }, 00:19:31.936 { 00:19:31.936 "name": "BaseBdev2", 00:19:31.936 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:31.936 "is_configured": true, 00:19:31.936 "data_offset": 0, 00:19:31.936 "data_size": 65536 00:19:31.936 }, 00:19:31.936 { 00:19:31.936 "name": "BaseBdev3", 00:19:31.936 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:31.936 "is_configured": true, 00:19:31.936 "data_offset": 0, 00:19:31.936 "data_size": 65536 00:19:31.936 } 00:19:31.936 ] 00:19:31.936 }' 00:19:31.936 11:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.936 11:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.501 11:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:32.501 11:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.759 11:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:32.759 11:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.759 11:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:33.017 11:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 
4835d3f6-1e30-44ab-ad23-fb03d4b80e70 00:19:33.582 [2024-06-10 11:43:05.351266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:33.582 [2024-06-10 11:43:05.351638] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:33.582 [2024-06-10 11:43:05.351688] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:33.582 [2024-06-10 11:43:05.351913] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:33.582 [2024-06-10 11:43:05.352393] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:33.582 [2024-06-10 11:43:05.352520] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:19:33.582 [2024-06-10 11:43:05.352884] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.582 NewBaseBdev 00:19:33.582 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:33.582 11:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:19:33.582 11:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:33.582 11:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:33.582 11:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:33.582 11:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:33.582 11:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.582 11:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:33.841 [ 00:19:33.841 { 00:19:33.841 "name": "NewBaseBdev", 00:19:33.841 "aliases": [ 00:19:33.841 "4835d3f6-1e30-44ab-ad23-fb03d4b80e70" 00:19:33.841 ], 00:19:33.841 "product_name": "Malloc disk", 00:19:33.841 "block_size": 512, 00:19:33.841 "num_blocks": 65536, 00:19:33.841 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:33.841 "assigned_rate_limits": { 00:19:33.841 "rw_ios_per_sec": 0, 00:19:33.841 "rw_mbytes_per_sec": 0, 00:19:33.841 "r_mbytes_per_sec": 0, 00:19:33.841 "w_mbytes_per_sec": 0 00:19:33.841 }, 00:19:33.841 "claimed": true, 00:19:33.841 "claim_type": "exclusive_write", 00:19:33.841 "zoned": false, 00:19:33.841 "supported_io_types": { 00:19:33.841 "read": true, 00:19:33.841 "write": true, 00:19:33.841 "unmap": true, 00:19:33.841 "write_zeroes": true, 00:19:33.841 "flush": true, 00:19:33.841 "reset": true, 00:19:33.841 "compare": false, 00:19:33.841 "compare_and_write": false, 00:19:33.841 "abort": true, 00:19:33.841 "nvme_admin": false, 00:19:33.841 "nvme_io": false 00:19:33.841 }, 00:19:33.841 "memory_domains": [ 00:19:33.841 { 00:19:33.841 "dma_device_id": "system", 00:19:33.841 "dma_device_type": 1 00:19:33.841 }, 00:19:33.841 { 00:19:33.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.841 "dma_device_type": 2 00:19:33.841 } 00:19:33.841 ], 00:19:33.841 "driver_specific": {} 00:19:33.841 } 00:19:33.841 ] 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.841 11:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.100 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.100 "name": "Existed_Raid", 00:19:34.100 "uuid": "9f78e866-d583-4c06-b757-015cf06c9c85", 00:19:34.100 "strip_size_kb": 64, 00:19:34.100 "state": "online", 00:19:34.100 "raid_level": "raid0", 00:19:34.100 "superblock": false, 00:19:34.100 "num_base_bdevs": 3, 00:19:34.100 "num_base_bdevs_discovered": 3, 00:19:34.100 "num_base_bdevs_operational": 3, 00:19:34.100 "base_bdevs_list": [ 00:19:34.100 { 00:19:34.100 "name": "NewBaseBdev", 00:19:34.100 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:34.100 "is_configured": true, 00:19:34.100 "data_offset": 0, 00:19:34.100 "data_size": 65536 00:19:34.100 }, 00:19:34.100 { 00:19:34.100 "name": "BaseBdev2", 00:19:34.100 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:34.100 "is_configured": true, 00:19:34.100 "data_offset": 0, 00:19:34.100 "data_size": 65536 00:19:34.100 }, 00:19:34.100 { 00:19:34.100 "name": "BaseBdev3", 00:19:34.100 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:34.100 "is_configured": true, 00:19:34.100 "data_offset": 0, 00:19:34.100 "data_size": 65536 00:19:34.100 } 00:19:34.100 ] 00:19:34.100 }' 00:19:34.100 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.100 11:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.667 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:34.667 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:34.667 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:34.667 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:34.667 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:34.667 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:34.667 11:43:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:34.667 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:34.925 [2024-06-10 11:43:06.912078] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.925 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:34.926 "name": "Existed_Raid", 00:19:34.926 "aliases": [ 00:19:34.926 "9f78e866-d583-4c06-b757-015cf06c9c85" 00:19:34.926 ], 00:19:34.926 "product_name": "Raid Volume", 00:19:34.926 "block_size": 512, 00:19:34.926 "num_blocks": 196608, 00:19:34.926 "uuid": "9f78e866-d583-4c06-b757-015cf06c9c85", 00:19:34.926 "assigned_rate_limits": { 00:19:34.926 "rw_ios_per_sec": 0, 00:19:34.926 "rw_mbytes_per_sec": 0, 00:19:34.926 "r_mbytes_per_sec": 0, 00:19:34.926 "w_mbytes_per_sec": 0 00:19:34.926 }, 00:19:34.926 "claimed": false, 00:19:34.926 "zoned": false, 00:19:34.926 "supported_io_types": { 00:19:34.926 "read": true, 00:19:34.926 "write": true, 00:19:34.926 "unmap": true, 00:19:34.926 "write_zeroes": true, 00:19:34.926 "flush": true, 00:19:34.926 "reset": true, 00:19:34.926 "compare": false, 00:19:34.926 "compare_and_write": false, 00:19:34.926 "abort": false, 00:19:34.926 "nvme_admin": false, 00:19:34.926 "nvme_io": false 00:19:34.926 }, 00:19:34.926 "memory_domains": [ 00:19:34.926 { 00:19:34.926 "dma_device_id": "system", 00:19:34.926 "dma_device_type": 1 00:19:34.926 }, 00:19:34.926 { 00:19:34.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.926 "dma_device_type": 2 00:19:34.926 }, 00:19:34.926 { 00:19:34.926 "dma_device_id": "system", 00:19:34.926 "dma_device_type": 1 00:19:34.926 }, 00:19:34.926 { 00:19:34.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.926 "dma_device_type": 2 00:19:34.926 }, 00:19:34.926 { 00:19:34.926 "dma_device_id": "system", 00:19:34.926 "dma_device_type": 1 00:19:34.926 }, 00:19:34.926 { 00:19:34.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.926 "dma_device_type": 2 00:19:34.926 } 00:19:34.926 ], 00:19:34.926 "driver_specific": { 00:19:34.926 "raid": { 00:19:34.926 "uuid": "9f78e866-d583-4c06-b757-015cf06c9c85", 00:19:34.926 "strip_size_kb": 64, 00:19:34.926 "state": "online", 00:19:34.926 "raid_level": "raid0", 00:19:34.926 "superblock": false, 00:19:34.926 "num_base_bdevs": 3, 00:19:34.926 "num_base_bdevs_discovered": 3, 00:19:34.926 "num_base_bdevs_operational": 3, 00:19:34.926 "base_bdevs_list": [ 00:19:34.926 { 00:19:34.926 "name": "NewBaseBdev", 00:19:34.926 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:34.926 "is_configured": true, 00:19:34.926 "data_offset": 0, 00:19:34.926 "data_size": 65536 00:19:34.926 }, 00:19:34.926 { 00:19:34.926 "name": "BaseBdev2", 00:19:34.926 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:34.926 "is_configured": true, 00:19:34.926 "data_offset": 0, 00:19:34.926 "data_size": 65536 00:19:34.926 }, 00:19:34.926 { 00:19:34.926 "name": "BaseBdev3", 00:19:34.926 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:34.926 "is_configured": true, 00:19:34.926 "data_offset": 0, 00:19:34.926 "data_size": 65536 00:19:34.926 } 00:19:34.926 ] 00:19:34.926 } 00:19:34.926 } 00:19:34.926 }' 00:19:34.926 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.184 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='NewBaseBdev 00:19:35.184 BaseBdev2 00:19:35.184 BaseBdev3' 00:19:35.184 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:35.184 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:35.184 11:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:35.442 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:35.442 "name": "NewBaseBdev", 00:19:35.442 "aliases": [ 00:19:35.442 "4835d3f6-1e30-44ab-ad23-fb03d4b80e70" 00:19:35.442 ], 00:19:35.442 "product_name": "Malloc disk", 00:19:35.442 "block_size": 512, 00:19:35.442 "num_blocks": 65536, 00:19:35.442 "uuid": "4835d3f6-1e30-44ab-ad23-fb03d4b80e70", 00:19:35.442 "assigned_rate_limits": { 00:19:35.442 "rw_ios_per_sec": 0, 00:19:35.442 "rw_mbytes_per_sec": 0, 00:19:35.442 "r_mbytes_per_sec": 0, 00:19:35.442 "w_mbytes_per_sec": 0 00:19:35.442 }, 00:19:35.442 "claimed": true, 00:19:35.442 "claim_type": "exclusive_write", 00:19:35.442 "zoned": false, 00:19:35.442 "supported_io_types": { 00:19:35.442 "read": true, 00:19:35.442 "write": true, 00:19:35.442 "unmap": true, 00:19:35.442 "write_zeroes": true, 00:19:35.442 "flush": true, 00:19:35.442 "reset": true, 00:19:35.442 "compare": false, 00:19:35.442 "compare_and_write": false, 00:19:35.442 "abort": true, 00:19:35.442 "nvme_admin": false, 00:19:35.442 "nvme_io": false 00:19:35.442 }, 00:19:35.442 "memory_domains": [ 00:19:35.442 { 00:19:35.442 "dma_device_id": "system", 00:19:35.442 "dma_device_type": 1 00:19:35.442 }, 00:19:35.442 { 00:19:35.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.442 "dma_device_type": 2 00:19:35.442 } 00:19:35.442 ], 00:19:35.442 "driver_specific": {} 00:19:35.442 }' 00:19:35.442 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.442 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.442 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:35.442 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.442 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:35.700 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:35.959 11:43:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:35.959 "name": "BaseBdev2", 00:19:35.959 "aliases": [ 00:19:35.959 "d4fe8e0a-063d-49fe-a61a-b8616797e3c1" 00:19:35.959 ], 00:19:35.959 "product_name": "Malloc disk", 00:19:35.959 "block_size": 512, 00:19:35.959 "num_blocks": 65536, 00:19:35.959 "uuid": "d4fe8e0a-063d-49fe-a61a-b8616797e3c1", 00:19:35.959 "assigned_rate_limits": { 00:19:35.959 "rw_ios_per_sec": 0, 00:19:35.959 "rw_mbytes_per_sec": 0, 00:19:35.959 "r_mbytes_per_sec": 0, 00:19:35.959 "w_mbytes_per_sec": 0 00:19:35.959 }, 00:19:35.959 "claimed": true, 00:19:35.959 "claim_type": "exclusive_write", 00:19:35.959 "zoned": false, 00:19:35.959 "supported_io_types": { 00:19:35.959 "read": true, 00:19:35.959 "write": true, 00:19:35.959 "unmap": true, 00:19:35.959 "write_zeroes": true, 00:19:35.959 "flush": true, 00:19:35.959 "reset": true, 00:19:35.959 "compare": false, 00:19:35.959 "compare_and_write": false, 00:19:35.959 "abort": true, 00:19:35.959 "nvme_admin": false, 00:19:35.959 "nvme_io": false 00:19:35.959 }, 00:19:35.959 "memory_domains": [ 00:19:35.959 { 00:19:35.959 "dma_device_id": "system", 00:19:35.959 "dma_device_type": 1 00:19:35.959 }, 00:19:35.959 { 00:19:35.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.959 "dma_device_type": 2 00:19:35.959 } 00:19:35.959 ], 00:19:35.959 "driver_specific": {} 00:19:35.959 }' 00:19:35.959 11:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.217 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.217 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:36.217 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.217 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.217 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.217 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.218 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.476 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.476 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.476 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.476 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.476 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.476 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:36.476 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.734 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.734 "name": "BaseBdev3", 00:19:36.734 "aliases": [ 00:19:36.734 "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d" 00:19:36.734 ], 00:19:36.734 "product_name": "Malloc disk", 00:19:36.734 "block_size": 512, 00:19:36.734 "num_blocks": 65536, 00:19:36.734 "uuid": "5cbe6f73-bb1f-4afa-8a27-0c1ec154493d", 00:19:36.734 "assigned_rate_limits": { 00:19:36.734 "rw_ios_per_sec": 0, 00:19:36.734 "rw_mbytes_per_sec": 0, 
00:19:36.734 "r_mbytes_per_sec": 0, 00:19:36.734 "w_mbytes_per_sec": 0 00:19:36.734 }, 00:19:36.734 "claimed": true, 00:19:36.734 "claim_type": "exclusive_write", 00:19:36.734 "zoned": false, 00:19:36.734 "supported_io_types": { 00:19:36.734 "read": true, 00:19:36.734 "write": true, 00:19:36.734 "unmap": true, 00:19:36.734 "write_zeroes": true, 00:19:36.734 "flush": true, 00:19:36.734 "reset": true, 00:19:36.734 "compare": false, 00:19:36.734 "compare_and_write": false, 00:19:36.734 "abort": true, 00:19:36.734 "nvme_admin": false, 00:19:36.734 "nvme_io": false 00:19:36.734 }, 00:19:36.734 "memory_domains": [ 00:19:36.734 { 00:19:36.734 "dma_device_id": "system", 00:19:36.734 "dma_device_type": 1 00:19:36.734 }, 00:19:36.734 { 00:19:36.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.734 "dma_device_type": 2 00:19:36.734 } 00:19:36.734 ], 00:19:36.734 "driver_specific": {} 00:19:36.734 }' 00:19:36.734 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.734 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.734 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:36.734 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.991 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.991 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.991 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.991 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.991 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.991 11:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.991 11:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.249 11:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.249 11:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:37.506 [2024-06-10 11:43:09.316199] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:37.506 [2024-06-10 11:43:09.316490] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.506 [2024-06-10 11:43:09.316675] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.506 [2024-06-10 11:43:09.316821] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.506 [2024-06-10 11:43:09.316908] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 126492 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 126492 ']' 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 126492 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 126492 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 126492' 00:19:37.506 killing process with pid 126492 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 126492 00:19:37.506 [2024-06-10 11:43:09.360769] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.506 11:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 126492 00:19:37.764 [2024-06-10 11:43:09.705638] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:39.663 00:19:39.663 real 0m33.691s 00:19:39.663 user 1m1.444s 00:19:39.663 sys 0m4.345s 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:39.663 ************************************ 00:19:39.663 END TEST raid_state_function_test 00:19:39.663 ************************************ 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.663 11:43:11 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:19:39.663 11:43:11 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:19:39.663 11:43:11 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:39.663 11:43:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.663 ************************************ 00:19:39.663 START TEST raid_state_function_test_sb 00:19:39.663 ************************************ 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 3 true 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:39.663 11:43:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=127516 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 127516' 00:19:39.663 Process raid pid: 127516 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 127516 /var/tmp/spdk-raid.sock 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 127516 ']' 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:39.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:39.663 11:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.663 [2024-06-10 11:43:11.370150] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
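Note: raid_state_function_test_sb drives the same state machine as the test above, but with superblock=true, so bdev_raid_create is invoked with -s and each base bdev reserves metadata space (data_offset 2048 / data_size 63488 in the dumps that follow, versus 0 / 65536 without a superblock). A rough sketch of how the arguments prepared above are combined, assuming the bdev_svc app launched here is already listening on /var/tmp/spdk-raid.sock:

  # Sketch only: assemble the create call seen later in this run.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  strip_size_create_arg="-z 64"   # set only because raid_level != raid1, as traced above
  superblock_create_arg="-s"      # '-s' when superblock=true, empty otherwise
  $rpc bdev_raid_create $strip_size_create_arg $superblock_create_arg \
      -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid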
00:19:39.663 [2024-06-10 11:43:11.370680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.663 [2024-06-10 11:43:11.559364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.921 [2024-06-10 11:43:11.841419] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.181 [2024-06-10 11:43:12.078665] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.438 11:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:40.438 11:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:19:40.438 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:40.695 [2024-06-10 11:43:12.634223] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:40.695 [2024-06-10 11:43:12.634593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:40.695 [2024-06-10 11:43:12.634728] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:40.695 [2024-06-10 11:43:12.634798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:40.695 [2024-06-10 11:43:12.634893] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:40.695 [2024-06-10 11:43:12.634945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.695 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.696 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.696 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.696 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.953 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.953 "name": "Existed_Raid", 00:19:40.953 "uuid": 
"cdb18dd8-b875-4bdd-90c4-d855a44b7bb2", 00:19:40.953 "strip_size_kb": 64, 00:19:40.953 "state": "configuring", 00:19:40.953 "raid_level": "raid0", 00:19:40.953 "superblock": true, 00:19:40.953 "num_base_bdevs": 3, 00:19:40.953 "num_base_bdevs_discovered": 0, 00:19:40.953 "num_base_bdevs_operational": 3, 00:19:40.953 "base_bdevs_list": [ 00:19:40.953 { 00:19:40.953 "name": "BaseBdev1", 00:19:40.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.953 "is_configured": false, 00:19:40.953 "data_offset": 0, 00:19:40.953 "data_size": 0 00:19:40.953 }, 00:19:40.953 { 00:19:40.953 "name": "BaseBdev2", 00:19:40.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.953 "is_configured": false, 00:19:40.953 "data_offset": 0, 00:19:40.953 "data_size": 0 00:19:40.953 }, 00:19:40.953 { 00:19:40.953 "name": "BaseBdev3", 00:19:40.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.953 "is_configured": false, 00:19:40.953 "data_offset": 0, 00:19:40.953 "data_size": 0 00:19:40.953 } 00:19:40.953 ] 00:19:40.953 }' 00:19:40.953 11:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.953 11:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.519 11:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:41.779 [2024-06-10 11:43:13.742319] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:41.779 [2024-06-10 11:43:13.742561] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:41.779 11:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:42.037 [2024-06-10 11:43:14.042399] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:42.037 [2024-06-10 11:43:14.042767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:42.037 [2024-06-10 11:43:14.042893] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:42.037 [2024-06-10 11:43:14.042964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:42.037 [2024-06-10 11:43:14.043067] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:42.037 [2024-06-10 11:43:14.043145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:42.037 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:42.603 [2024-06-10 11:43:14.370594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.603 BaseBdev1 00:19:42.603 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:42.603 11:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:19:42.603 11:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:42.603 11:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 
00:19:42.603 11:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:42.603 11:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:42.603 11:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:42.860 11:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:43.118 [ 00:19:43.118 { 00:19:43.118 "name": "BaseBdev1", 00:19:43.118 "aliases": [ 00:19:43.118 "193c0808-cb24-42b8-9d98-5f5368a63e39" 00:19:43.118 ], 00:19:43.118 "product_name": "Malloc disk", 00:19:43.118 "block_size": 512, 00:19:43.118 "num_blocks": 65536, 00:19:43.118 "uuid": "193c0808-cb24-42b8-9d98-5f5368a63e39", 00:19:43.118 "assigned_rate_limits": { 00:19:43.118 "rw_ios_per_sec": 0, 00:19:43.118 "rw_mbytes_per_sec": 0, 00:19:43.118 "r_mbytes_per_sec": 0, 00:19:43.118 "w_mbytes_per_sec": 0 00:19:43.118 }, 00:19:43.118 "claimed": true, 00:19:43.118 "claim_type": "exclusive_write", 00:19:43.118 "zoned": false, 00:19:43.118 "supported_io_types": { 00:19:43.118 "read": true, 00:19:43.118 "write": true, 00:19:43.118 "unmap": true, 00:19:43.118 "write_zeroes": true, 00:19:43.118 "flush": true, 00:19:43.118 "reset": true, 00:19:43.118 "compare": false, 00:19:43.118 "compare_and_write": false, 00:19:43.118 "abort": true, 00:19:43.118 "nvme_admin": false, 00:19:43.118 "nvme_io": false 00:19:43.118 }, 00:19:43.118 "memory_domains": [ 00:19:43.118 { 00:19:43.118 "dma_device_id": "system", 00:19:43.118 "dma_device_type": 1 00:19:43.118 }, 00:19:43.118 { 00:19:43.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.118 "dma_device_type": 2 00:19:43.118 } 00:19:43.118 ], 00:19:43.118 "driver_specific": {} 00:19:43.118 } 00:19:43.118 ] 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.118 11:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:19:43.399 11:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.399 "name": "Existed_Raid", 00:19:43.399 "uuid": "6cdaf5f1-7a23-44f0-8262-9d5acd09a1e7", 00:19:43.399 "strip_size_kb": 64, 00:19:43.399 "state": "configuring", 00:19:43.399 "raid_level": "raid0", 00:19:43.399 "superblock": true, 00:19:43.399 "num_base_bdevs": 3, 00:19:43.399 "num_base_bdevs_discovered": 1, 00:19:43.399 "num_base_bdevs_operational": 3, 00:19:43.399 "base_bdevs_list": [ 00:19:43.399 { 00:19:43.399 "name": "BaseBdev1", 00:19:43.399 "uuid": "193c0808-cb24-42b8-9d98-5f5368a63e39", 00:19:43.399 "is_configured": true, 00:19:43.399 "data_offset": 2048, 00:19:43.399 "data_size": 63488 00:19:43.399 }, 00:19:43.399 { 00:19:43.399 "name": "BaseBdev2", 00:19:43.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.399 "is_configured": false, 00:19:43.399 "data_offset": 0, 00:19:43.399 "data_size": 0 00:19:43.399 }, 00:19:43.399 { 00:19:43.399 "name": "BaseBdev3", 00:19:43.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.399 "is_configured": false, 00:19:43.399 "data_offset": 0, 00:19:43.399 "data_size": 0 00:19:43.399 } 00:19:43.399 ] 00:19:43.399 }' 00:19:43.399 11:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.399 11:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 11:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:43.968 [2024-06-10 11:43:15.935133] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:43.968 [2024-06-10 11:43:15.935485] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:43.968 11:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:44.226 [2024-06-10 11:43:16.167207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.226 [2024-06-10 11:43:16.169789] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:44.226 [2024-06-10 11:43:16.170070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:44.226 [2024-06-10 11:43:16.170224] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:44.226 [2024-06-10 11:43:16.170371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
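Note: the comparison step of verify_raid_bdev_state runs under xtrace_disable, so it does not appear in this log. A plausible equivalent of the check, built only from the calls that are traced (expected values taken from the configuring-state dump above):

  # Sketch: pull the Existed_Raid entry and compare the fields the helper
  # declares as locals (state, discovered/operational base bdev counts).
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r .state <<< "$info")
  discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
  [[ $state == configuring && $discovered -eq 1 ]] || echo "unexpected raid state: $state/$discovered"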
00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.226 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.484 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.484 "name": "Existed_Raid", 00:19:44.484 "uuid": "e3266a48-699f-4ee4-806b-d26df878609b", 00:19:44.484 "strip_size_kb": 64, 00:19:44.484 "state": "configuring", 00:19:44.484 "raid_level": "raid0", 00:19:44.484 "superblock": true, 00:19:44.484 "num_base_bdevs": 3, 00:19:44.484 "num_base_bdevs_discovered": 1, 00:19:44.484 "num_base_bdevs_operational": 3, 00:19:44.484 "base_bdevs_list": [ 00:19:44.484 { 00:19:44.484 "name": "BaseBdev1", 00:19:44.484 "uuid": "193c0808-cb24-42b8-9d98-5f5368a63e39", 00:19:44.484 "is_configured": true, 00:19:44.484 "data_offset": 2048, 00:19:44.484 "data_size": 63488 00:19:44.484 }, 00:19:44.484 { 00:19:44.484 "name": "BaseBdev2", 00:19:44.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.484 "is_configured": false, 00:19:44.484 "data_offset": 0, 00:19:44.484 "data_size": 0 00:19:44.484 }, 00:19:44.484 { 00:19:44.484 "name": "BaseBdev3", 00:19:44.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.484 "is_configured": false, 00:19:44.484 "data_offset": 0, 00:19:44.484 "data_size": 0 00:19:44.484 } 00:19:44.484 ] 00:19:44.484 }' 00:19:44.484 11:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.484 11:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.058 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:45.316 [2024-06-10 11:43:17.333102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.316 BaseBdev2 00:19:45.316 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:45.316 11:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:19:45.316 11:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:45.316 11:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:45.316 11:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:45.316 11:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:45.316 11:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:45.884 11:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:45.884 [ 00:19:45.884 { 00:19:45.884 "name": "BaseBdev2", 00:19:45.884 "aliases": [ 00:19:45.884 "a091aef4-41c8-472a-9f0f-967c4a5264af" 00:19:45.884 ], 00:19:45.884 "product_name": "Malloc disk", 00:19:45.884 "block_size": 512, 00:19:45.884 "num_blocks": 65536, 00:19:45.884 "uuid": "a091aef4-41c8-472a-9f0f-967c4a5264af", 00:19:45.884 "assigned_rate_limits": { 00:19:45.884 "rw_ios_per_sec": 0, 00:19:45.884 "rw_mbytes_per_sec": 0, 00:19:45.884 "r_mbytes_per_sec": 0, 00:19:45.884 "w_mbytes_per_sec": 0 00:19:45.884 }, 00:19:45.884 "claimed": true, 00:19:45.884 "claim_type": "exclusive_write", 00:19:45.885 "zoned": false, 00:19:45.885 "supported_io_types": { 00:19:45.885 "read": true, 00:19:45.885 "write": true, 00:19:45.885 "unmap": true, 00:19:45.885 "write_zeroes": true, 00:19:45.885 "flush": true, 00:19:45.885 "reset": true, 00:19:45.885 "compare": false, 00:19:45.885 "compare_and_write": false, 00:19:45.885 "abort": true, 00:19:45.885 "nvme_admin": false, 00:19:45.885 "nvme_io": false 00:19:45.885 }, 00:19:45.885 "memory_domains": [ 00:19:45.885 { 00:19:45.885 "dma_device_id": "system", 00:19:45.885 "dma_device_type": 1 00:19:45.885 }, 00:19:45.885 { 00:19:45.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.885 "dma_device_type": 2 00:19:45.885 } 00:19:45.885 ], 00:19:45.885 "driver_specific": {} 00:19:45.885 } 00:19:45.885 ] 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.885 11:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.143 11:43:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.143 "name": "Existed_Raid", 00:19:46.143 "uuid": "e3266a48-699f-4ee4-806b-d26df878609b", 00:19:46.143 "strip_size_kb": 64, 00:19:46.143 "state": "configuring", 00:19:46.143 "raid_level": "raid0", 00:19:46.143 "superblock": true, 00:19:46.143 "num_base_bdevs": 3, 00:19:46.143 "num_base_bdevs_discovered": 2, 00:19:46.143 "num_base_bdevs_operational": 3, 00:19:46.143 "base_bdevs_list": [ 00:19:46.143 { 00:19:46.143 "name": "BaseBdev1", 00:19:46.143 "uuid": "193c0808-cb24-42b8-9d98-5f5368a63e39", 00:19:46.143 "is_configured": true, 00:19:46.143 "data_offset": 2048, 00:19:46.143 "data_size": 63488 00:19:46.143 }, 00:19:46.143 { 00:19:46.143 "name": "BaseBdev2", 00:19:46.143 "uuid": "a091aef4-41c8-472a-9f0f-967c4a5264af", 00:19:46.143 "is_configured": true, 00:19:46.143 "data_offset": 2048, 00:19:46.143 "data_size": 63488 00:19:46.143 }, 00:19:46.143 { 00:19:46.143 "name": "BaseBdev3", 00:19:46.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.143 "is_configured": false, 00:19:46.143 "data_offset": 0, 00:19:46.143 "data_size": 0 00:19:46.143 } 00:19:46.143 ] 00:19:46.143 }' 00:19:46.143 11:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.143 11:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.711 11:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:46.969 [2024-06-10 11:43:18.970054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.969 [2024-06-10 11:43:18.970583] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:46.969 BaseBdev3 00:19:46.969 [2024-06-10 11:43:18.971738] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:46.969 [2024-06-10 11:43:18.972041] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:46.969 [2024-06-10 11:43:18.972518] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:46.969 [2024-06-10 11:43:18.972635] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:46.969 [2024-06-10 11:43:18.972884] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.969 11:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:46.969 11:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:19:46.969 11:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:46.969 11:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:46.969 11:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:46.969 11:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:46.969 11:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.227 11:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:19:47.485 [ 00:19:47.485 { 00:19:47.485 "name": "BaseBdev3", 00:19:47.485 "aliases": [ 00:19:47.485 "8720ccaa-dcf5-42d0-be56-88a6624e1a01" 00:19:47.485 ], 00:19:47.485 "product_name": "Malloc disk", 00:19:47.485 "block_size": 512, 00:19:47.485 "num_blocks": 65536, 00:19:47.485 "uuid": "8720ccaa-dcf5-42d0-be56-88a6624e1a01", 00:19:47.485 "assigned_rate_limits": { 00:19:47.485 "rw_ios_per_sec": 0, 00:19:47.485 "rw_mbytes_per_sec": 0, 00:19:47.485 "r_mbytes_per_sec": 0, 00:19:47.485 "w_mbytes_per_sec": 0 00:19:47.485 }, 00:19:47.485 "claimed": true, 00:19:47.485 "claim_type": "exclusive_write", 00:19:47.485 "zoned": false, 00:19:47.485 "supported_io_types": { 00:19:47.485 "read": true, 00:19:47.485 "write": true, 00:19:47.485 "unmap": true, 00:19:47.485 "write_zeroes": true, 00:19:47.485 "flush": true, 00:19:47.485 "reset": true, 00:19:47.485 "compare": false, 00:19:47.485 "compare_and_write": false, 00:19:47.485 "abort": true, 00:19:47.485 "nvme_admin": false, 00:19:47.485 "nvme_io": false 00:19:47.485 }, 00:19:47.485 "memory_domains": [ 00:19:47.485 { 00:19:47.485 "dma_device_id": "system", 00:19:47.485 "dma_device_type": 1 00:19:47.485 }, 00:19:47.485 { 00:19:47.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.485 "dma_device_type": 2 00:19:47.485 } 00:19:47.485 ], 00:19:47.485 "driver_specific": {} 00:19:47.485 } 00:19:47.485 ] 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.485 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.486 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.744 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.744 "name": "Existed_Raid", 00:19:47.744 "uuid": "e3266a48-699f-4ee4-806b-d26df878609b", 00:19:47.744 "strip_size_kb": 64, 00:19:47.744 "state": "online", 00:19:47.744 "raid_level": "raid0", 00:19:47.744 "superblock": true, 00:19:47.744 
"num_base_bdevs": 3, 00:19:47.744 "num_base_bdevs_discovered": 3, 00:19:47.744 "num_base_bdevs_operational": 3, 00:19:47.744 "base_bdevs_list": [ 00:19:47.744 { 00:19:47.744 "name": "BaseBdev1", 00:19:47.744 "uuid": "193c0808-cb24-42b8-9d98-5f5368a63e39", 00:19:47.744 "is_configured": true, 00:19:47.744 "data_offset": 2048, 00:19:47.744 "data_size": 63488 00:19:47.744 }, 00:19:47.744 { 00:19:47.744 "name": "BaseBdev2", 00:19:47.744 "uuid": "a091aef4-41c8-472a-9f0f-967c4a5264af", 00:19:47.744 "is_configured": true, 00:19:47.744 "data_offset": 2048, 00:19:47.744 "data_size": 63488 00:19:47.744 }, 00:19:47.744 { 00:19:47.744 "name": "BaseBdev3", 00:19:47.744 "uuid": "8720ccaa-dcf5-42d0-be56-88a6624e1a01", 00:19:47.744 "is_configured": true, 00:19:47.744 "data_offset": 2048, 00:19:47.744 "data_size": 63488 00:19:47.744 } 00:19:47.744 ] 00:19:47.744 }' 00:19:47.744 11:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.744 11:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.333 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:48.333 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:48.333 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:48.333 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:48.333 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:48.333 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:48.333 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:48.333 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:48.594 [2024-06-10 11:43:20.524919] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:48.594 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:48.594 "name": "Existed_Raid", 00:19:48.594 "aliases": [ 00:19:48.594 "e3266a48-699f-4ee4-806b-d26df878609b" 00:19:48.594 ], 00:19:48.594 "product_name": "Raid Volume", 00:19:48.594 "block_size": 512, 00:19:48.594 "num_blocks": 190464, 00:19:48.594 "uuid": "e3266a48-699f-4ee4-806b-d26df878609b", 00:19:48.594 "assigned_rate_limits": { 00:19:48.594 "rw_ios_per_sec": 0, 00:19:48.594 "rw_mbytes_per_sec": 0, 00:19:48.594 "r_mbytes_per_sec": 0, 00:19:48.594 "w_mbytes_per_sec": 0 00:19:48.594 }, 00:19:48.594 "claimed": false, 00:19:48.594 "zoned": false, 00:19:48.594 "supported_io_types": { 00:19:48.594 "read": true, 00:19:48.594 "write": true, 00:19:48.594 "unmap": true, 00:19:48.594 "write_zeroes": true, 00:19:48.594 "flush": true, 00:19:48.594 "reset": true, 00:19:48.594 "compare": false, 00:19:48.594 "compare_and_write": false, 00:19:48.594 "abort": false, 00:19:48.594 "nvme_admin": false, 00:19:48.594 "nvme_io": false 00:19:48.594 }, 00:19:48.594 "memory_domains": [ 00:19:48.594 { 00:19:48.594 "dma_device_id": "system", 00:19:48.594 "dma_device_type": 1 00:19:48.594 }, 00:19:48.594 { 00:19:48.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.594 "dma_device_type": 2 00:19:48.594 }, 00:19:48.594 { 00:19:48.594 "dma_device_id": "system", 
00:19:48.594 "dma_device_type": 1 00:19:48.594 }, 00:19:48.594 { 00:19:48.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.594 "dma_device_type": 2 00:19:48.594 }, 00:19:48.594 { 00:19:48.594 "dma_device_id": "system", 00:19:48.594 "dma_device_type": 1 00:19:48.594 }, 00:19:48.594 { 00:19:48.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.594 "dma_device_type": 2 00:19:48.594 } 00:19:48.594 ], 00:19:48.594 "driver_specific": { 00:19:48.594 "raid": { 00:19:48.594 "uuid": "e3266a48-699f-4ee4-806b-d26df878609b", 00:19:48.594 "strip_size_kb": 64, 00:19:48.594 "state": "online", 00:19:48.594 "raid_level": "raid0", 00:19:48.594 "superblock": true, 00:19:48.594 "num_base_bdevs": 3, 00:19:48.594 "num_base_bdevs_discovered": 3, 00:19:48.594 "num_base_bdevs_operational": 3, 00:19:48.594 "base_bdevs_list": [ 00:19:48.594 { 00:19:48.594 "name": "BaseBdev1", 00:19:48.594 "uuid": "193c0808-cb24-42b8-9d98-5f5368a63e39", 00:19:48.594 "is_configured": true, 00:19:48.594 "data_offset": 2048, 00:19:48.594 "data_size": 63488 00:19:48.594 }, 00:19:48.594 { 00:19:48.594 "name": "BaseBdev2", 00:19:48.594 "uuid": "a091aef4-41c8-472a-9f0f-967c4a5264af", 00:19:48.594 "is_configured": true, 00:19:48.594 "data_offset": 2048, 00:19:48.594 "data_size": 63488 00:19:48.594 }, 00:19:48.594 { 00:19:48.594 "name": "BaseBdev3", 00:19:48.594 "uuid": "8720ccaa-dcf5-42d0-be56-88a6624e1a01", 00:19:48.594 "is_configured": true, 00:19:48.594 "data_offset": 2048, 00:19:48.594 "data_size": 63488 00:19:48.594 } 00:19:48.594 ] 00:19:48.594 } 00:19:48.594 } 00:19:48.594 }' 00:19:48.594 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:48.594 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:48.594 BaseBdev2 00:19:48.594 BaseBdev3' 00:19:48.594 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:48.594 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:48.594 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:48.853 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:48.853 "name": "BaseBdev1", 00:19:48.853 "aliases": [ 00:19:48.853 "193c0808-cb24-42b8-9d98-5f5368a63e39" 00:19:48.853 ], 00:19:48.853 "product_name": "Malloc disk", 00:19:48.853 "block_size": 512, 00:19:48.853 "num_blocks": 65536, 00:19:48.853 "uuid": "193c0808-cb24-42b8-9d98-5f5368a63e39", 00:19:48.853 "assigned_rate_limits": { 00:19:48.853 "rw_ios_per_sec": 0, 00:19:48.853 "rw_mbytes_per_sec": 0, 00:19:48.853 "r_mbytes_per_sec": 0, 00:19:48.853 "w_mbytes_per_sec": 0 00:19:48.853 }, 00:19:48.853 "claimed": true, 00:19:48.853 "claim_type": "exclusive_write", 00:19:48.853 "zoned": false, 00:19:48.853 "supported_io_types": { 00:19:48.853 "read": true, 00:19:48.853 "write": true, 00:19:48.853 "unmap": true, 00:19:48.853 "write_zeroes": true, 00:19:48.853 "flush": true, 00:19:48.853 "reset": true, 00:19:48.853 "compare": false, 00:19:48.853 "compare_and_write": false, 00:19:48.853 "abort": true, 00:19:48.853 "nvme_admin": false, 00:19:48.853 "nvme_io": false 00:19:48.853 }, 00:19:48.853 "memory_domains": [ 00:19:48.853 { 00:19:48.853 "dma_device_id": "system", 00:19:48.853 "dma_device_type": 1 00:19:48.853 }, 
00:19:48.853 { 00:19:48.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.853 "dma_device_type": 2 00:19:48.853 } 00:19:48.853 ], 00:19:48.853 "driver_specific": {} 00:19:48.853 }' 00:19:48.853 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.109 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.109 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:49.109 11:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.109 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.109 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:49.109 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.109 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.109 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.109 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.366 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.366 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:49.366 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:49.366 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:49.366 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:49.625 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:49.625 "name": "BaseBdev2", 00:19:49.625 "aliases": [ 00:19:49.625 "a091aef4-41c8-472a-9f0f-967c4a5264af" 00:19:49.625 ], 00:19:49.625 "product_name": "Malloc disk", 00:19:49.625 "block_size": 512, 00:19:49.625 "num_blocks": 65536, 00:19:49.625 "uuid": "a091aef4-41c8-472a-9f0f-967c4a5264af", 00:19:49.625 "assigned_rate_limits": { 00:19:49.625 "rw_ios_per_sec": 0, 00:19:49.625 "rw_mbytes_per_sec": 0, 00:19:49.625 "r_mbytes_per_sec": 0, 00:19:49.625 "w_mbytes_per_sec": 0 00:19:49.625 }, 00:19:49.625 "claimed": true, 00:19:49.625 "claim_type": "exclusive_write", 00:19:49.625 "zoned": false, 00:19:49.625 "supported_io_types": { 00:19:49.625 "read": true, 00:19:49.625 "write": true, 00:19:49.625 "unmap": true, 00:19:49.625 "write_zeroes": true, 00:19:49.625 "flush": true, 00:19:49.625 "reset": true, 00:19:49.625 "compare": false, 00:19:49.625 "compare_and_write": false, 00:19:49.625 "abort": true, 00:19:49.625 "nvme_admin": false, 00:19:49.625 "nvme_io": false 00:19:49.625 }, 00:19:49.625 "memory_domains": [ 00:19:49.625 { 00:19:49.625 "dma_device_id": "system", 00:19:49.625 "dma_device_type": 1 00:19:49.625 }, 00:19:49.625 { 00:19:49.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.625 "dma_device_type": 2 00:19:49.625 } 00:19:49.625 ], 00:19:49.625 "driver_specific": {} 00:19:49.625 }' 00:19:49.625 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.625 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.625 11:43:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:49.625 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.625 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.625 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:49.625 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.883 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.883 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.883 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.883 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.883 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:49.883 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:49.883 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:49.883 11:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:50.140 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:50.140 "name": "BaseBdev3", 00:19:50.140 "aliases": [ 00:19:50.140 "8720ccaa-dcf5-42d0-be56-88a6624e1a01" 00:19:50.140 ], 00:19:50.140 "product_name": "Malloc disk", 00:19:50.140 "block_size": 512, 00:19:50.140 "num_blocks": 65536, 00:19:50.140 "uuid": "8720ccaa-dcf5-42d0-be56-88a6624e1a01", 00:19:50.140 "assigned_rate_limits": { 00:19:50.140 "rw_ios_per_sec": 0, 00:19:50.140 "rw_mbytes_per_sec": 0, 00:19:50.140 "r_mbytes_per_sec": 0, 00:19:50.140 "w_mbytes_per_sec": 0 00:19:50.140 }, 00:19:50.140 "claimed": true, 00:19:50.140 "claim_type": "exclusive_write", 00:19:50.140 "zoned": false, 00:19:50.140 "supported_io_types": { 00:19:50.140 "read": true, 00:19:50.140 "write": true, 00:19:50.140 "unmap": true, 00:19:50.140 "write_zeroes": true, 00:19:50.140 "flush": true, 00:19:50.140 "reset": true, 00:19:50.140 "compare": false, 00:19:50.140 "compare_and_write": false, 00:19:50.140 "abort": true, 00:19:50.140 "nvme_admin": false, 00:19:50.140 "nvme_io": false 00:19:50.140 }, 00:19:50.140 "memory_domains": [ 00:19:50.140 { 00:19:50.140 "dma_device_id": "system", 00:19:50.140 "dma_device_type": 1 00:19:50.140 }, 00:19:50.140 { 00:19:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.140 "dma_device_type": 2 00:19:50.140 } 00:19:50.140 ], 00:19:50.140 "driver_specific": {} 00:19:50.140 }' 00:19:50.140 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.140 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.140 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:50.140 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.398 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.398 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.398 11:43:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.398 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.398 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.398 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.398 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.398 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.398 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:50.658 [2024-06-10 11:43:22.643307] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:50.658 [2024-06-10 11:43:22.643609] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.658 [2024-06-10 11:43:22.643751] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.926 11:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.193 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.193 "name": "Existed_Raid", 00:19:51.193 "uuid": "e3266a48-699f-4ee4-806b-d26df878609b", 00:19:51.193 "strip_size_kb": 64, 00:19:51.193 "state": "offline", 00:19:51.193 "raid_level": "raid0", 00:19:51.193 "superblock": true, 00:19:51.193 
"num_base_bdevs": 3, 00:19:51.193 "num_base_bdevs_discovered": 2, 00:19:51.193 "num_base_bdevs_operational": 2, 00:19:51.193 "base_bdevs_list": [ 00:19:51.193 { 00:19:51.193 "name": null, 00:19:51.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.193 "is_configured": false, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 }, 00:19:51.193 { 00:19:51.193 "name": "BaseBdev2", 00:19:51.193 "uuid": "a091aef4-41c8-472a-9f0f-967c4a5264af", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 }, 00:19:51.193 { 00:19:51.193 "name": "BaseBdev3", 00:19:51.193 "uuid": "8720ccaa-dcf5-42d0-be56-88a6624e1a01", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 } 00:19:51.193 ] 00:19:51.193 }' 00:19:51.194 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.194 11:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.783 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:51.783 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:51.783 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.783 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:51.783 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:51.783 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:51.783 11:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:52.055 [2024-06-10 11:43:23.907166] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:52.055 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:52.055 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:52.055 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.055 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:52.328 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:52.328 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:52.328 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:52.590 [2024-06-10 11:43:24.414417] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:52.590 [2024-06-10 11:43:24.414699] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:52.590 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:52.590 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:52.590 
11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:52.590 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.848 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:52.848 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:52.848 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:52.848 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:52.848 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:52.848 11:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:53.106 BaseBdev2 00:19:53.106 11:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:53.106 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:19:53.106 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:53.106 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:53.106 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:53.106 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:53.106 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:53.363 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:53.622 [ 00:19:53.622 { 00:19:53.622 "name": "BaseBdev2", 00:19:53.622 "aliases": [ 00:19:53.622 "244139e9-7659-479a-a053-723e24606e9d" 00:19:53.622 ], 00:19:53.622 "product_name": "Malloc disk", 00:19:53.622 "block_size": 512, 00:19:53.622 "num_blocks": 65536, 00:19:53.622 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:19:53.622 "assigned_rate_limits": { 00:19:53.622 "rw_ios_per_sec": 0, 00:19:53.622 "rw_mbytes_per_sec": 0, 00:19:53.622 "r_mbytes_per_sec": 0, 00:19:53.622 "w_mbytes_per_sec": 0 00:19:53.622 }, 00:19:53.622 "claimed": false, 00:19:53.622 "zoned": false, 00:19:53.622 "supported_io_types": { 00:19:53.622 "read": true, 00:19:53.622 "write": true, 00:19:53.622 "unmap": true, 00:19:53.622 "write_zeroes": true, 00:19:53.622 "flush": true, 00:19:53.622 "reset": true, 00:19:53.622 "compare": false, 00:19:53.622 "compare_and_write": false, 00:19:53.622 "abort": true, 00:19:53.622 "nvme_admin": false, 00:19:53.622 "nvme_io": false 00:19:53.622 }, 00:19:53.622 "memory_domains": [ 00:19:53.622 { 00:19:53.622 "dma_device_id": "system", 00:19:53.622 "dma_device_type": 1 00:19:53.622 }, 00:19:53.622 { 00:19:53.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.622 "dma_device_type": 2 00:19:53.622 } 00:19:53.622 ], 00:19:53.622 "driver_specific": {} 00:19:53.622 } 00:19:53.622 ] 00:19:53.622 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 
00:19:53.622 11:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:53.622 11:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:53.622 11:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:53.881 BaseBdev3 00:19:54.139 11:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:54.139 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:19:54.139 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:54.139 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:54.139 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:54.139 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:54.139 11:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.397 11:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:54.655 [ 00:19:54.655 { 00:19:54.655 "name": "BaseBdev3", 00:19:54.655 "aliases": [ 00:19:54.655 "47845ec5-7abd-4bbd-aac8-382ec8afa1f6" 00:19:54.655 ], 00:19:54.655 "product_name": "Malloc disk", 00:19:54.655 "block_size": 512, 00:19:54.655 "num_blocks": 65536, 00:19:54.655 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:19:54.655 "assigned_rate_limits": { 00:19:54.655 "rw_ios_per_sec": 0, 00:19:54.655 "rw_mbytes_per_sec": 0, 00:19:54.655 "r_mbytes_per_sec": 0, 00:19:54.655 "w_mbytes_per_sec": 0 00:19:54.655 }, 00:19:54.655 "claimed": false, 00:19:54.655 "zoned": false, 00:19:54.655 "supported_io_types": { 00:19:54.655 "read": true, 00:19:54.655 "write": true, 00:19:54.655 "unmap": true, 00:19:54.655 "write_zeroes": true, 00:19:54.655 "flush": true, 00:19:54.655 "reset": true, 00:19:54.655 "compare": false, 00:19:54.655 "compare_and_write": false, 00:19:54.655 "abort": true, 00:19:54.655 "nvme_admin": false, 00:19:54.655 "nvme_io": false 00:19:54.655 }, 00:19:54.655 "memory_domains": [ 00:19:54.655 { 00:19:54.655 "dma_device_id": "system", 00:19:54.655 "dma_device_type": 1 00:19:54.655 }, 00:19:54.655 { 00:19:54.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.655 "dma_device_type": 2 00:19:54.655 } 00:19:54.655 ], 00:19:54.655 "driver_specific": {} 00:19:54.655 } 00:19:54.655 ] 00:19:54.655 11:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:54.655 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:54.655 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:54.655 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:54.913 [2024-06-10 11:43:26.780077] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.913 
[2024-06-10 11:43:26.780326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:54.913 [2024-06-10 11:43:26.780474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.913 [2024-06-10 11:43:26.782633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.913 11:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.171 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.171 "name": "Existed_Raid", 00:19:55.171 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:19:55.171 "strip_size_kb": 64, 00:19:55.171 "state": "configuring", 00:19:55.171 "raid_level": "raid0", 00:19:55.171 "superblock": true, 00:19:55.171 "num_base_bdevs": 3, 00:19:55.171 "num_base_bdevs_discovered": 2, 00:19:55.171 "num_base_bdevs_operational": 3, 00:19:55.171 "base_bdevs_list": [ 00:19:55.171 { 00:19:55.171 "name": "BaseBdev1", 00:19:55.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.171 "is_configured": false, 00:19:55.171 "data_offset": 0, 00:19:55.171 "data_size": 0 00:19:55.171 }, 00:19:55.171 { 00:19:55.171 "name": "BaseBdev2", 00:19:55.171 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:19:55.171 "is_configured": true, 00:19:55.171 "data_offset": 2048, 00:19:55.171 "data_size": 63488 00:19:55.171 }, 00:19:55.171 { 00:19:55.171 "name": "BaseBdev3", 00:19:55.171 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:19:55.171 "is_configured": true, 00:19:55.171 "data_offset": 2048, 00:19:55.171 "data_size": 63488 00:19:55.171 } 00:19:55.171 ] 00:19:55.171 }' 00:19:55.171 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.171 11:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.736 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:55.995 [2024-06-10 11:43:27.891089] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.995 11:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.253 11:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.253 "name": "Existed_Raid", 00:19:56.253 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:19:56.253 "strip_size_kb": 64, 00:19:56.253 "state": "configuring", 00:19:56.253 "raid_level": "raid0", 00:19:56.253 "superblock": true, 00:19:56.253 "num_base_bdevs": 3, 00:19:56.253 "num_base_bdevs_discovered": 1, 00:19:56.253 "num_base_bdevs_operational": 3, 00:19:56.253 "base_bdevs_list": [ 00:19:56.253 { 00:19:56.253 "name": "BaseBdev1", 00:19:56.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.253 "is_configured": false, 00:19:56.253 "data_offset": 0, 00:19:56.253 "data_size": 0 00:19:56.253 }, 00:19:56.253 { 00:19:56.253 "name": null, 00:19:56.253 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:19:56.253 "is_configured": false, 00:19:56.253 "data_offset": 2048, 00:19:56.253 "data_size": 63488 00:19:56.253 }, 00:19:56.253 { 00:19:56.253 "name": "BaseBdev3", 00:19:56.253 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:19:56.253 "is_configured": true, 00:19:56.253 "data_offset": 2048, 00:19:56.253 "data_size": 63488 00:19:56.253 } 00:19:56.253 ] 00:19:56.253 }' 00:19:56.253 11:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.253 11:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.819 11:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.819 11:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:57.077 11:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:57.077 11:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:57.336 [2024-06-10 11:43:29.281333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.336 BaseBdev1 00:19:57.336 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:57.336 11:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:19:57.336 11:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:57.336 11:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:57.336 11:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:57.336 11:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:57.336 11:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.671 11:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:57.930 [ 00:19:57.930 { 00:19:57.930 "name": "BaseBdev1", 00:19:57.930 "aliases": [ 00:19:57.930 "4f213f2a-825a-4534-b025-dcc8b6585b55" 00:19:57.930 ], 00:19:57.930 "product_name": "Malloc disk", 00:19:57.930 "block_size": 512, 00:19:57.930 "num_blocks": 65536, 00:19:57.930 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:19:57.930 "assigned_rate_limits": { 00:19:57.930 "rw_ios_per_sec": 0, 00:19:57.930 "rw_mbytes_per_sec": 0, 00:19:57.930 "r_mbytes_per_sec": 0, 00:19:57.930 "w_mbytes_per_sec": 0 00:19:57.930 }, 00:19:57.930 "claimed": true, 00:19:57.930 "claim_type": "exclusive_write", 00:19:57.930 "zoned": false, 00:19:57.930 "supported_io_types": { 00:19:57.930 "read": true, 00:19:57.930 "write": true, 00:19:57.930 "unmap": true, 00:19:57.930 "write_zeroes": true, 00:19:57.930 "flush": true, 00:19:57.930 "reset": true, 00:19:57.930 "compare": false, 00:19:57.930 "compare_and_write": false, 00:19:57.930 "abort": true, 00:19:57.930 "nvme_admin": false, 00:19:57.930 "nvme_io": false 00:19:57.930 }, 00:19:57.930 "memory_domains": [ 00:19:57.930 { 00:19:57.930 "dma_device_id": "system", 00:19:57.930 "dma_device_type": 1 00:19:57.930 }, 00:19:57.930 { 00:19:57.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.930 "dma_device_type": 2 00:19:57.930 } 00:19:57.930 ], 00:19:57.930 "driver_specific": {} 00:19:57.930 } 00:19:57.930 ] 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.930 11:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.188 11:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.188 "name": "Existed_Raid", 00:19:58.188 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:19:58.188 "strip_size_kb": 64, 00:19:58.188 "state": "configuring", 00:19:58.188 "raid_level": "raid0", 00:19:58.188 "superblock": true, 00:19:58.188 "num_base_bdevs": 3, 00:19:58.188 "num_base_bdevs_discovered": 2, 00:19:58.188 "num_base_bdevs_operational": 3, 00:19:58.188 "base_bdevs_list": [ 00:19:58.188 { 00:19:58.188 "name": "BaseBdev1", 00:19:58.188 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:19:58.188 "is_configured": true, 00:19:58.188 "data_offset": 2048, 00:19:58.188 "data_size": 63488 00:19:58.188 }, 00:19:58.188 { 00:19:58.188 "name": null, 00:19:58.188 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:19:58.188 "is_configured": false, 00:19:58.188 "data_offset": 2048, 00:19:58.188 "data_size": 63488 00:19:58.188 }, 00:19:58.188 { 00:19:58.188 "name": "BaseBdev3", 00:19:58.188 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:19:58.188 "is_configured": true, 00:19:58.188 "data_offset": 2048, 00:19:58.188 "data_size": 63488 00:19:58.188 } 00:19:58.188 ] 00:19:58.188 }' 00:19:58.188 11:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.188 11:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.755 11:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.755 11:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:59.013 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:59.013 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:59.271 [2024-06-10 11:43:31.298136] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:59.271 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:59.271 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.271 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.271 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:59.271 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:59.271 
11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:59.271 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.271 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.530 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.530 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.530 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.530 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.788 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.788 "name": "Existed_Raid", 00:19:59.788 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:19:59.788 "strip_size_kb": 64, 00:19:59.788 "state": "configuring", 00:19:59.788 "raid_level": "raid0", 00:19:59.788 "superblock": true, 00:19:59.788 "num_base_bdevs": 3, 00:19:59.788 "num_base_bdevs_discovered": 1, 00:19:59.788 "num_base_bdevs_operational": 3, 00:19:59.788 "base_bdevs_list": [ 00:19:59.788 { 00:19:59.788 "name": "BaseBdev1", 00:19:59.788 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:19:59.788 "is_configured": true, 00:19:59.788 "data_offset": 2048, 00:19:59.788 "data_size": 63488 00:19:59.788 }, 00:19:59.788 { 00:19:59.788 "name": null, 00:19:59.788 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:19:59.788 "is_configured": false, 00:19:59.788 "data_offset": 2048, 00:19:59.788 "data_size": 63488 00:19:59.788 }, 00:19:59.788 { 00:19:59.788 "name": null, 00:19:59.788 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:19:59.788 "is_configured": false, 00:19:59.788 "data_offset": 2048, 00:19:59.788 "data_size": 63488 00:19:59.788 } 00:19:59.788 ] 00:19:59.788 }' 00:19:59.788 11:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.788 11:43:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.353 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:00.353 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.611 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:00.611 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:00.869 [2024-06-10 11:43:32.739190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.869 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.128 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.128 "name": "Existed_Raid", 00:20:01.128 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:20:01.128 "strip_size_kb": 64, 00:20:01.128 "state": "configuring", 00:20:01.128 "raid_level": "raid0", 00:20:01.128 "superblock": true, 00:20:01.128 "num_base_bdevs": 3, 00:20:01.128 "num_base_bdevs_discovered": 2, 00:20:01.128 "num_base_bdevs_operational": 3, 00:20:01.128 "base_bdevs_list": [ 00:20:01.128 { 00:20:01.128 "name": "BaseBdev1", 00:20:01.128 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:20:01.128 "is_configured": true, 00:20:01.128 "data_offset": 2048, 00:20:01.128 "data_size": 63488 00:20:01.128 }, 00:20:01.128 { 00:20:01.128 "name": null, 00:20:01.128 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:20:01.128 "is_configured": false, 00:20:01.128 "data_offset": 2048, 00:20:01.128 "data_size": 63488 00:20:01.128 }, 00:20:01.128 { 00:20:01.128 "name": "BaseBdev3", 00:20:01.128 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:20:01.128 "is_configured": true, 00:20:01.128 "data_offset": 2048, 00:20:01.128 "data_size": 63488 00:20:01.128 } 00:20:01.128 ] 00:20:01.128 }' 00:20:01.128 11:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.128 11:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.694 11:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:01.694 11:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.952 11:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:01.952 11:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:02.211 [2024-06-10 11:43:34.051643] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.211 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.469 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.469 "name": "Existed_Raid", 00:20:02.469 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:20:02.469 "strip_size_kb": 64, 00:20:02.469 "state": "configuring", 00:20:02.469 "raid_level": "raid0", 00:20:02.469 "superblock": true, 00:20:02.469 "num_base_bdevs": 3, 00:20:02.469 "num_base_bdevs_discovered": 1, 00:20:02.469 "num_base_bdevs_operational": 3, 00:20:02.469 "base_bdevs_list": [ 00:20:02.469 { 00:20:02.469 "name": null, 00:20:02.469 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:20:02.469 "is_configured": false, 00:20:02.469 "data_offset": 2048, 00:20:02.469 "data_size": 63488 00:20:02.469 }, 00:20:02.469 { 00:20:02.469 "name": null, 00:20:02.469 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:20:02.469 "is_configured": false, 00:20:02.469 "data_offset": 2048, 00:20:02.469 "data_size": 63488 00:20:02.469 }, 00:20:02.469 { 00:20:02.469 "name": "BaseBdev3", 00:20:02.469 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:20:02.469 "is_configured": true, 00:20:02.469 "data_offset": 2048, 00:20:02.469 "data_size": 63488 00:20:02.469 } 00:20:02.469 ] 00:20:02.469 }' 00:20:02.469 11:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.469 11:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.406 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.406 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:03.406 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:03.406 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:03.665 [2024-06-10 11:43:35.680663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.666 11:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.233 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.233 "name": "Existed_Raid", 00:20:04.233 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:20:04.233 "strip_size_kb": 64, 00:20:04.233 "state": "configuring", 00:20:04.233 "raid_level": "raid0", 00:20:04.233 "superblock": true, 00:20:04.233 "num_base_bdevs": 3, 00:20:04.233 "num_base_bdevs_discovered": 2, 00:20:04.233 "num_base_bdevs_operational": 3, 00:20:04.233 "base_bdevs_list": [ 00:20:04.233 { 00:20:04.233 "name": null, 00:20:04.233 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:20:04.233 "is_configured": false, 00:20:04.233 "data_offset": 2048, 00:20:04.233 "data_size": 63488 00:20:04.233 }, 00:20:04.233 { 00:20:04.233 "name": "BaseBdev2", 00:20:04.233 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:20:04.233 "is_configured": true, 00:20:04.233 "data_offset": 2048, 00:20:04.233 "data_size": 63488 00:20:04.233 }, 00:20:04.233 { 00:20:04.233 "name": "BaseBdev3", 00:20:04.233 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:20:04.233 "is_configured": true, 00:20:04.233 "data_offset": 2048, 00:20:04.233 "data_size": 63488 00:20:04.233 } 00:20:04.233 ] 00:20:04.233 }' 00:20:04.233 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.233 11:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.800 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:04.800 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.059 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:05.059 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.059 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:05.317 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4f213f2a-825a-4534-b025-dcc8b6585b55 00:20:05.576 [2024-06-10 11:43:37.552151] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:05.576 [2024-06-10 11:43:37.552529] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:20:05.576 [2024-06-10 11:43:37.552645] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:05.576 [2024-06-10 11:43:37.552809] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:05.576 [2024-06-10 11:43:37.553185] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:20:05.576 [2024-06-10 11:43:37.553304] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:20:05.576 [2024-06-10 11:43:37.553554] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.576 NewBaseBdev 00:20:05.576 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:05.576 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:20:05.576 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:05.576 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:20:05.576 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:05.576 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:05.576 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:05.835 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:06.093 [ 00:20:06.093 { 00:20:06.093 "name": "NewBaseBdev", 00:20:06.093 "aliases": [ 00:20:06.093 "4f213f2a-825a-4534-b025-dcc8b6585b55" 00:20:06.093 ], 00:20:06.093 "product_name": "Malloc disk", 00:20:06.093 "block_size": 512, 00:20:06.093 "num_blocks": 65536, 00:20:06.093 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:20:06.093 "assigned_rate_limits": { 00:20:06.093 "rw_ios_per_sec": 0, 00:20:06.093 "rw_mbytes_per_sec": 0, 00:20:06.093 "r_mbytes_per_sec": 0, 00:20:06.093 "w_mbytes_per_sec": 0 00:20:06.093 }, 00:20:06.093 "claimed": true, 00:20:06.093 "claim_type": "exclusive_write", 00:20:06.093 "zoned": false, 00:20:06.093 "supported_io_types": { 00:20:06.093 "read": true, 00:20:06.093 "write": true, 00:20:06.093 "unmap": true, 00:20:06.093 "write_zeroes": true, 00:20:06.093 "flush": true, 00:20:06.093 "reset": true, 00:20:06.093 "compare": false, 00:20:06.093 "compare_and_write": false, 00:20:06.093 "abort": true, 00:20:06.093 "nvme_admin": false, 00:20:06.093 "nvme_io": false 00:20:06.093 }, 00:20:06.093 "memory_domains": [ 00:20:06.093 { 00:20:06.093 "dma_device_id": "system", 00:20:06.093 "dma_device_type": 1 00:20:06.093 }, 00:20:06.093 { 00:20:06.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.093 "dma_device_type": 2 00:20:06.093 } 00:20:06.093 ], 00:20:06.093 "driver_specific": {} 00:20:06.093 } 00:20:06.093 ] 00:20:06.093 11:43:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.093 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.352 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.352 "name": "Existed_Raid", 00:20:06.352 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:20:06.352 "strip_size_kb": 64, 00:20:06.352 "state": "online", 00:20:06.352 "raid_level": "raid0", 00:20:06.352 "superblock": true, 00:20:06.352 "num_base_bdevs": 3, 00:20:06.352 "num_base_bdevs_discovered": 3, 00:20:06.352 "num_base_bdevs_operational": 3, 00:20:06.352 "base_bdevs_list": [ 00:20:06.352 { 00:20:06.352 "name": "NewBaseBdev", 00:20:06.352 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:20:06.352 "is_configured": true, 00:20:06.352 "data_offset": 2048, 00:20:06.352 "data_size": 63488 00:20:06.352 }, 00:20:06.352 { 00:20:06.352 "name": "BaseBdev2", 00:20:06.352 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:20:06.352 "is_configured": true, 00:20:06.352 "data_offset": 2048, 00:20:06.352 "data_size": 63488 00:20:06.352 }, 00:20:06.352 { 00:20:06.352 "name": "BaseBdev3", 00:20:06.352 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:20:06.352 "is_configured": true, 00:20:06.352 "data_offset": 2048, 00:20:06.352 "data_size": 63488 00:20:06.352 } 00:20:06.352 ] 00:20:06.352 }' 00:20:06.352 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.352 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.919 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:06.919 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:06.919 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:06.919 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:06.919 11:43:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:06.919 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:06.919 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:06.919 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:07.178 [2024-06-10 11:43:39.131806] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.178 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:07.178 "name": "Existed_Raid", 00:20:07.178 "aliases": [ 00:20:07.178 "87eef1bb-53c9-4691-a6e7-c40a75e2afba" 00:20:07.178 ], 00:20:07.178 "product_name": "Raid Volume", 00:20:07.178 "block_size": 512, 00:20:07.178 "num_blocks": 190464, 00:20:07.178 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:20:07.178 "assigned_rate_limits": { 00:20:07.178 "rw_ios_per_sec": 0, 00:20:07.178 "rw_mbytes_per_sec": 0, 00:20:07.178 "r_mbytes_per_sec": 0, 00:20:07.178 "w_mbytes_per_sec": 0 00:20:07.178 }, 00:20:07.178 "claimed": false, 00:20:07.178 "zoned": false, 00:20:07.178 "supported_io_types": { 00:20:07.178 "read": true, 00:20:07.178 "write": true, 00:20:07.178 "unmap": true, 00:20:07.178 "write_zeroes": true, 00:20:07.178 "flush": true, 00:20:07.178 "reset": true, 00:20:07.178 "compare": false, 00:20:07.178 "compare_and_write": false, 00:20:07.178 "abort": false, 00:20:07.178 "nvme_admin": false, 00:20:07.178 "nvme_io": false 00:20:07.178 }, 00:20:07.178 "memory_domains": [ 00:20:07.178 { 00:20:07.178 "dma_device_id": "system", 00:20:07.178 "dma_device_type": 1 00:20:07.178 }, 00:20:07.178 { 00:20:07.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.178 "dma_device_type": 2 00:20:07.178 }, 00:20:07.178 { 00:20:07.178 "dma_device_id": "system", 00:20:07.178 "dma_device_type": 1 00:20:07.178 }, 00:20:07.178 { 00:20:07.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.178 "dma_device_type": 2 00:20:07.178 }, 00:20:07.178 { 00:20:07.178 "dma_device_id": "system", 00:20:07.178 "dma_device_type": 1 00:20:07.178 }, 00:20:07.178 { 00:20:07.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.178 "dma_device_type": 2 00:20:07.178 } 00:20:07.178 ], 00:20:07.178 "driver_specific": { 00:20:07.178 "raid": { 00:20:07.178 "uuid": "87eef1bb-53c9-4691-a6e7-c40a75e2afba", 00:20:07.178 "strip_size_kb": 64, 00:20:07.178 "state": "online", 00:20:07.178 "raid_level": "raid0", 00:20:07.178 "superblock": true, 00:20:07.178 "num_base_bdevs": 3, 00:20:07.178 "num_base_bdevs_discovered": 3, 00:20:07.178 "num_base_bdevs_operational": 3, 00:20:07.178 "base_bdevs_list": [ 00:20:07.178 { 00:20:07.178 "name": "NewBaseBdev", 00:20:07.178 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:20:07.178 "is_configured": true, 00:20:07.178 "data_offset": 2048, 00:20:07.178 "data_size": 63488 00:20:07.178 }, 00:20:07.178 { 00:20:07.178 "name": "BaseBdev2", 00:20:07.178 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:20:07.178 "is_configured": true, 00:20:07.178 "data_offset": 2048, 00:20:07.178 "data_size": 63488 00:20:07.178 }, 00:20:07.178 { 00:20:07.178 "name": "BaseBdev3", 00:20:07.178 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:20:07.178 "is_configured": true, 00:20:07.178 "data_offset": 2048, 00:20:07.178 "data_size": 63488 00:20:07.178 } 00:20:07.178 ] 00:20:07.178 } 00:20:07.178 } 00:20:07.178 }' 00:20:07.178 11:43:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.178 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:07.178 BaseBdev2 00:20:07.178 BaseBdev3' 00:20:07.178 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:07.178 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:07.178 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:07.437 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:07.437 "name": "NewBaseBdev", 00:20:07.437 "aliases": [ 00:20:07.437 "4f213f2a-825a-4534-b025-dcc8b6585b55" 00:20:07.437 ], 00:20:07.437 "product_name": "Malloc disk", 00:20:07.437 "block_size": 512, 00:20:07.437 "num_blocks": 65536, 00:20:07.437 "uuid": "4f213f2a-825a-4534-b025-dcc8b6585b55", 00:20:07.437 "assigned_rate_limits": { 00:20:07.437 "rw_ios_per_sec": 0, 00:20:07.437 "rw_mbytes_per_sec": 0, 00:20:07.437 "r_mbytes_per_sec": 0, 00:20:07.437 "w_mbytes_per_sec": 0 00:20:07.437 }, 00:20:07.437 "claimed": true, 00:20:07.437 "claim_type": "exclusive_write", 00:20:07.437 "zoned": false, 00:20:07.437 "supported_io_types": { 00:20:07.437 "read": true, 00:20:07.437 "write": true, 00:20:07.437 "unmap": true, 00:20:07.437 "write_zeroes": true, 00:20:07.437 "flush": true, 00:20:07.437 "reset": true, 00:20:07.437 "compare": false, 00:20:07.437 "compare_and_write": false, 00:20:07.437 "abort": true, 00:20:07.437 "nvme_admin": false, 00:20:07.437 "nvme_io": false 00:20:07.437 }, 00:20:07.437 "memory_domains": [ 00:20:07.437 { 00:20:07.437 "dma_device_id": "system", 00:20:07.437 "dma_device_type": 1 00:20:07.437 }, 00:20:07.437 { 00:20:07.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.437 "dma_device_type": 2 00:20:07.437 } 00:20:07.437 ], 00:20:07.437 "driver_specific": {} 00:20:07.437 }' 00:20:07.437 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:07.437 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:07.698 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:07.698 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:07.698 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:07.698 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:07.698 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:07.698 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:07.698 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:07.698 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:07.958 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:07.958 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:07.958 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:20:07.958 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:07.958 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:08.218 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:08.218 "name": "BaseBdev2", 00:20:08.218 "aliases": [ 00:20:08.218 "244139e9-7659-479a-a053-723e24606e9d" 00:20:08.218 ], 00:20:08.218 "product_name": "Malloc disk", 00:20:08.218 "block_size": 512, 00:20:08.218 "num_blocks": 65536, 00:20:08.218 "uuid": "244139e9-7659-479a-a053-723e24606e9d", 00:20:08.218 "assigned_rate_limits": { 00:20:08.218 "rw_ios_per_sec": 0, 00:20:08.218 "rw_mbytes_per_sec": 0, 00:20:08.218 "r_mbytes_per_sec": 0, 00:20:08.218 "w_mbytes_per_sec": 0 00:20:08.218 }, 00:20:08.218 "claimed": true, 00:20:08.218 "claim_type": "exclusive_write", 00:20:08.218 "zoned": false, 00:20:08.218 "supported_io_types": { 00:20:08.218 "read": true, 00:20:08.218 "write": true, 00:20:08.218 "unmap": true, 00:20:08.218 "write_zeroes": true, 00:20:08.218 "flush": true, 00:20:08.218 "reset": true, 00:20:08.218 "compare": false, 00:20:08.218 "compare_and_write": false, 00:20:08.218 "abort": true, 00:20:08.218 "nvme_admin": false, 00:20:08.218 "nvme_io": false 00:20:08.218 }, 00:20:08.218 "memory_domains": [ 00:20:08.218 { 00:20:08.218 "dma_device_id": "system", 00:20:08.218 "dma_device_type": 1 00:20:08.218 }, 00:20:08.218 { 00:20:08.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.218 "dma_device_type": 2 00:20:08.218 } 00:20:08.218 ], 00:20:08.218 "driver_specific": {} 00:20:08.218 }' 00:20:08.218 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.218 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.218 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:08.218 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.218 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:08.476 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:08.733 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:08.733 "name": "BaseBdev3", 00:20:08.733 "aliases": 
[ 00:20:08.733 "47845ec5-7abd-4bbd-aac8-382ec8afa1f6" 00:20:08.733 ], 00:20:08.733 "product_name": "Malloc disk", 00:20:08.733 "block_size": 512, 00:20:08.733 "num_blocks": 65536, 00:20:08.733 "uuid": "47845ec5-7abd-4bbd-aac8-382ec8afa1f6", 00:20:08.733 "assigned_rate_limits": { 00:20:08.733 "rw_ios_per_sec": 0, 00:20:08.733 "rw_mbytes_per_sec": 0, 00:20:08.733 "r_mbytes_per_sec": 0, 00:20:08.733 "w_mbytes_per_sec": 0 00:20:08.733 }, 00:20:08.733 "claimed": true, 00:20:08.733 "claim_type": "exclusive_write", 00:20:08.733 "zoned": false, 00:20:08.733 "supported_io_types": { 00:20:08.733 "read": true, 00:20:08.733 "write": true, 00:20:08.733 "unmap": true, 00:20:08.733 "write_zeroes": true, 00:20:08.733 "flush": true, 00:20:08.733 "reset": true, 00:20:08.733 "compare": false, 00:20:08.733 "compare_and_write": false, 00:20:08.733 "abort": true, 00:20:08.733 "nvme_admin": false, 00:20:08.733 "nvme_io": false 00:20:08.733 }, 00:20:08.733 "memory_domains": [ 00:20:08.733 { 00:20:08.733 "dma_device_id": "system", 00:20:08.733 "dma_device_type": 1 00:20:08.733 }, 00:20:08.733 { 00:20:08.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.733 "dma_device_type": 2 00:20:08.733 } 00:20:08.733 ], 00:20:08.733 "driver_specific": {} 00:20:08.733 }' 00:20:08.733 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.991 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.992 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:08.992 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.992 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.992 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:08.992 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.992 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.992 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:08.992 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.250 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.250 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:09.250 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:09.508 [2024-06-10 11:43:41.372062] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.508 [2024-06-10 11:43:41.372286] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.508 [2024-06-10 11:43:41.372479] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.508 [2024-06-10 11:43:41.372637] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.508 [2024-06-10 11:43:41.372722] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 127516 00:20:09.508 11:43:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 127516 ']' 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 127516 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 127516 00:20:09.508 killing process with pid 127516 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 127516' 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 127516 00:20:09.508 [2024-06-10 11:43:41.415212] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:09.508 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 127516 00:20:09.766 [2024-06-10 11:43:41.726147] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:11.140 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:11.140 00:20:11.140 real 0m31.844s 00:20:11.140 user 0m57.575s 00:20:11.140 sys 0m4.484s 00:20:11.140 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:11.140 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.140 ************************************ 00:20:11.140 END TEST raid_state_function_test_sb 00:20:11.140 ************************************ 00:20:11.140 11:43:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:20:11.140 11:43:43 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:20:11.140 11:43:43 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:11.140 11:43:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.140 ************************************ 00:20:11.140 START TEST raid_superblock_test 00:20:11.140 ************************************ 00:20:11.140 11:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 3 00:20:11.140 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:20:11.140 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:20:11.140 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local 
raid_bdev_name=raid_bdev1 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=128513 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 128513 /var/tmp/spdk-raid.sock 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 128513 ']' 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:11.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:11.398 11:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.398 [2024-06-10 11:43:43.265361] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:20:11.398 [2024-06-10 11:43:43.265760] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128513 ] 00:20:11.398 [2024-06-10 11:43:43.430156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.656 [2024-06-10 11:43:43.674400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.914 [2024-06-10 11:43:43.902772] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:12.172 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:12.738 malloc1 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:12.738 [2024-06-10 11:43:44.693617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:12.738 [2024-06-10 11:43:44.693910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.738 [2024-06-10 11:43:44.694066] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:12.738 [2024-06-10 11:43:44.694189] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.738 [2024-06-10 11:43:44.697293] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.738 [2024-06-10 11:43:44.697474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:12.738 pt1 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:12.738 11:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:12.996 malloc2 00:20:12.996 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.254 [2024-06-10 11:43:45.239978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.254 [2024-06-10 11:43:45.240291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.254 [2024-06-10 11:43:45.240385] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:13.254 [2024-06-10 11:43:45.240510] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.254 [2024-06-10 11:43:45.243030] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.254 [2024-06-10 11:43:45.243210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.254 pt2 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:13.254 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:13.516 malloc3 00:20:13.516 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:13.777 [2024-06-10 11:43:45.694558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:13.777 [2024-06-10 11:43:45.694951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.777 [2024-06-10 11:43:45.695114] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:13.777 [2024-06-10 11:43:45.695244] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.777 [2024-06-10 11:43:45.698182] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.777 [2024-06-10 11:43:45.698372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:13.777 pt3 00:20:13.777 11:43:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:13.777 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:13.777 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:14.034 [2024-06-10 11:43:45.914822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:14.034 [2024-06-10 11:43:45.917317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:14.034 [2024-06-10 11:43:45.917525] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:14.034 [2024-06-10 11:43:45.917867] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:20:14.034 [2024-06-10 11:43:45.918002] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:14.034 [2024-06-10 11:43:45.918176] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:14.034 [2024-06-10 11:43:45.918592] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:20:14.034 [2024-06-10 11:43:45.918730] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:20:14.034 [2024-06-10 11:43:45.919026] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.034 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.035 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.035 11:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.292 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.292 "name": "raid_bdev1", 00:20:14.292 "uuid": "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7", 00:20:14.292 "strip_size_kb": 64, 00:20:14.292 "state": "online", 00:20:14.292 "raid_level": "raid0", 00:20:14.292 "superblock": true, 00:20:14.292 "num_base_bdevs": 3, 00:20:14.292 "num_base_bdevs_discovered": 3, 00:20:14.292 "num_base_bdevs_operational": 3, 00:20:14.292 "base_bdevs_list": [ 00:20:14.292 { 00:20:14.292 "name": "pt1", 00:20:14.292 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:14.292 
"is_configured": true, 00:20:14.292 "data_offset": 2048, 00:20:14.292 "data_size": 63488 00:20:14.292 }, 00:20:14.292 { 00:20:14.292 "name": "pt2", 00:20:14.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:14.292 "is_configured": true, 00:20:14.292 "data_offset": 2048, 00:20:14.292 "data_size": 63488 00:20:14.292 }, 00:20:14.292 { 00:20:14.292 "name": "pt3", 00:20:14.292 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:14.292 "is_configured": true, 00:20:14.292 "data_offset": 2048, 00:20:14.292 "data_size": 63488 00:20:14.292 } 00:20:14.292 ] 00:20:14.292 }' 00:20:14.292 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.292 11:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.858 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:14.858 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:14.858 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:14.858 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:14.858 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:14.858 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:14.858 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:14.858 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:15.116 [2024-06-10 11:43:46.943431] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.116 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:15.116 "name": "raid_bdev1", 00:20:15.116 "aliases": [ 00:20:15.116 "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7" 00:20:15.116 ], 00:20:15.116 "product_name": "Raid Volume", 00:20:15.116 "block_size": 512, 00:20:15.116 "num_blocks": 190464, 00:20:15.116 "uuid": "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7", 00:20:15.116 "assigned_rate_limits": { 00:20:15.116 "rw_ios_per_sec": 0, 00:20:15.116 "rw_mbytes_per_sec": 0, 00:20:15.116 "r_mbytes_per_sec": 0, 00:20:15.116 "w_mbytes_per_sec": 0 00:20:15.116 }, 00:20:15.116 "claimed": false, 00:20:15.116 "zoned": false, 00:20:15.116 "supported_io_types": { 00:20:15.116 "read": true, 00:20:15.116 "write": true, 00:20:15.116 "unmap": true, 00:20:15.116 "write_zeroes": true, 00:20:15.116 "flush": true, 00:20:15.116 "reset": true, 00:20:15.116 "compare": false, 00:20:15.116 "compare_and_write": false, 00:20:15.116 "abort": false, 00:20:15.116 "nvme_admin": false, 00:20:15.116 "nvme_io": false 00:20:15.116 }, 00:20:15.116 "memory_domains": [ 00:20:15.116 { 00:20:15.116 "dma_device_id": "system", 00:20:15.116 "dma_device_type": 1 00:20:15.116 }, 00:20:15.116 { 00:20:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.116 "dma_device_type": 2 00:20:15.116 }, 00:20:15.116 { 00:20:15.116 "dma_device_id": "system", 00:20:15.116 "dma_device_type": 1 00:20:15.116 }, 00:20:15.116 { 00:20:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.116 "dma_device_type": 2 00:20:15.116 }, 00:20:15.116 { 00:20:15.116 "dma_device_id": "system", 00:20:15.116 "dma_device_type": 1 00:20:15.116 }, 00:20:15.116 { 00:20:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.116 "dma_device_type": 
2 00:20:15.116 } 00:20:15.116 ], 00:20:15.116 "driver_specific": { 00:20:15.116 "raid": { 00:20:15.116 "uuid": "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7", 00:20:15.116 "strip_size_kb": 64, 00:20:15.116 "state": "online", 00:20:15.116 "raid_level": "raid0", 00:20:15.116 "superblock": true, 00:20:15.116 "num_base_bdevs": 3, 00:20:15.116 "num_base_bdevs_discovered": 3, 00:20:15.116 "num_base_bdevs_operational": 3, 00:20:15.116 "base_bdevs_list": [ 00:20:15.116 { 00:20:15.116 "name": "pt1", 00:20:15.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:15.116 "is_configured": true, 00:20:15.116 "data_offset": 2048, 00:20:15.116 "data_size": 63488 00:20:15.116 }, 00:20:15.116 { 00:20:15.116 "name": "pt2", 00:20:15.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.116 "is_configured": true, 00:20:15.116 "data_offset": 2048, 00:20:15.116 "data_size": 63488 00:20:15.116 }, 00:20:15.116 { 00:20:15.116 "name": "pt3", 00:20:15.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:15.116 "is_configured": true, 00:20:15.116 "data_offset": 2048, 00:20:15.116 "data_size": 63488 00:20:15.116 } 00:20:15.116 ] 00:20:15.116 } 00:20:15.116 } 00:20:15.116 }' 00:20:15.116 11:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:15.116 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:15.116 pt2 00:20:15.116 pt3' 00:20:15.116 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:15.116 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:15.116 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:15.374 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:15.374 "name": "pt1", 00:20:15.374 "aliases": [ 00:20:15.374 "00000000-0000-0000-0000-000000000001" 00:20:15.374 ], 00:20:15.374 "product_name": "passthru", 00:20:15.374 "block_size": 512, 00:20:15.374 "num_blocks": 65536, 00:20:15.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:15.374 "assigned_rate_limits": { 00:20:15.374 "rw_ios_per_sec": 0, 00:20:15.374 "rw_mbytes_per_sec": 0, 00:20:15.374 "r_mbytes_per_sec": 0, 00:20:15.374 "w_mbytes_per_sec": 0 00:20:15.374 }, 00:20:15.374 "claimed": true, 00:20:15.374 "claim_type": "exclusive_write", 00:20:15.374 "zoned": false, 00:20:15.374 "supported_io_types": { 00:20:15.374 "read": true, 00:20:15.374 "write": true, 00:20:15.374 "unmap": true, 00:20:15.374 "write_zeroes": true, 00:20:15.374 "flush": true, 00:20:15.374 "reset": true, 00:20:15.374 "compare": false, 00:20:15.374 "compare_and_write": false, 00:20:15.374 "abort": true, 00:20:15.374 "nvme_admin": false, 00:20:15.374 "nvme_io": false 00:20:15.374 }, 00:20:15.374 "memory_domains": [ 00:20:15.374 { 00:20:15.374 "dma_device_id": "system", 00:20:15.374 "dma_device_type": 1 00:20:15.374 }, 00:20:15.374 { 00:20:15.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.374 "dma_device_type": 2 00:20:15.374 } 00:20:15.374 ], 00:20:15.374 "driver_specific": { 00:20:15.374 "passthru": { 00:20:15.374 "name": "pt1", 00:20:15.374 "base_bdev_name": "malloc1" 00:20:15.374 } 00:20:15.374 } 00:20:15.374 }' 00:20:15.374 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:15.374 11:43:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:15.374 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:15.374 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:15.374 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:15.374 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:15.374 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:15.632 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:15.632 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:15.632 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:15.632 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:15.632 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:15.632 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:15.632 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:15.632 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:15.889 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:15.889 "name": "pt2", 00:20:15.889 "aliases": [ 00:20:15.889 "00000000-0000-0000-0000-000000000002" 00:20:15.889 ], 00:20:15.889 "product_name": "passthru", 00:20:15.889 "block_size": 512, 00:20:15.889 "num_blocks": 65536, 00:20:15.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.889 "assigned_rate_limits": { 00:20:15.889 "rw_ios_per_sec": 0, 00:20:15.889 "rw_mbytes_per_sec": 0, 00:20:15.889 "r_mbytes_per_sec": 0, 00:20:15.889 "w_mbytes_per_sec": 0 00:20:15.889 }, 00:20:15.889 "claimed": true, 00:20:15.889 "claim_type": "exclusive_write", 00:20:15.889 "zoned": false, 00:20:15.889 "supported_io_types": { 00:20:15.889 "read": true, 00:20:15.889 "write": true, 00:20:15.889 "unmap": true, 00:20:15.889 "write_zeroes": true, 00:20:15.889 "flush": true, 00:20:15.889 "reset": true, 00:20:15.889 "compare": false, 00:20:15.889 "compare_and_write": false, 00:20:15.889 "abort": true, 00:20:15.889 "nvme_admin": false, 00:20:15.889 "nvme_io": false 00:20:15.889 }, 00:20:15.889 "memory_domains": [ 00:20:15.889 { 00:20:15.889 "dma_device_id": "system", 00:20:15.889 "dma_device_type": 1 00:20:15.889 }, 00:20:15.889 { 00:20:15.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.889 "dma_device_type": 2 00:20:15.889 } 00:20:15.889 ], 00:20:15.889 "driver_specific": { 00:20:15.889 "passthru": { 00:20:15.889 "name": "pt2", 00:20:15.889 "base_bdev_name": "malloc2" 00:20:15.889 } 00:20:15.889 } 00:20:15.889 }' 00:20:15.889 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:15.889 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.148 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:16.148 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.148 11:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.148 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:16.148 11:43:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.148 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.148 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:16.148 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.148 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.405 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:16.405 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:16.405 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:16.405 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:16.664 "name": "pt3", 00:20:16.664 "aliases": [ 00:20:16.664 "00000000-0000-0000-0000-000000000003" 00:20:16.664 ], 00:20:16.664 "product_name": "passthru", 00:20:16.664 "block_size": 512, 00:20:16.664 "num_blocks": 65536, 00:20:16.664 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:16.664 "assigned_rate_limits": { 00:20:16.664 "rw_ios_per_sec": 0, 00:20:16.664 "rw_mbytes_per_sec": 0, 00:20:16.664 "r_mbytes_per_sec": 0, 00:20:16.664 "w_mbytes_per_sec": 0 00:20:16.664 }, 00:20:16.664 "claimed": true, 00:20:16.664 "claim_type": "exclusive_write", 00:20:16.664 "zoned": false, 00:20:16.664 "supported_io_types": { 00:20:16.664 "read": true, 00:20:16.664 "write": true, 00:20:16.664 "unmap": true, 00:20:16.664 "write_zeroes": true, 00:20:16.664 "flush": true, 00:20:16.664 "reset": true, 00:20:16.664 "compare": false, 00:20:16.664 "compare_and_write": false, 00:20:16.664 "abort": true, 00:20:16.664 "nvme_admin": false, 00:20:16.664 "nvme_io": false 00:20:16.664 }, 00:20:16.664 "memory_domains": [ 00:20:16.664 { 00:20:16.664 "dma_device_id": "system", 00:20:16.664 "dma_device_type": 1 00:20:16.664 }, 00:20:16.664 { 00:20:16.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.664 "dma_device_type": 2 00:20:16.664 } 00:20:16.664 ], 00:20:16.664 "driver_specific": { 00:20:16.664 "passthru": { 00:20:16.664 "name": "pt3", 00:20:16.664 "base_bdev_name": "malloc3" 00:20:16.664 } 00:20:16.664 } 00:20:16.664 }' 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.664 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.922 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:16.922 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.922 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:20:16.922 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:16.922 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:16.922 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:17.181 [2024-06-10 11:43:49.131993] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.181 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=03510e8e-6bcb-4b6c-baf4-e8de73e01ac7 00:20:17.181 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 03510e8e-6bcb-4b6c-baf4-e8de73e01ac7 ']' 00:20:17.181 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:17.438 [2024-06-10 11:43:49.415795] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.438 [2024-06-10 11:43:49.416003] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.438 [2024-06-10 11:43:49.416177] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.438 [2024-06-10 11:43:49.416323] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.438 [2024-06-10 11:43:49.416407] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:20:17.438 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.438 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:17.696 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:17.696 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:17.696 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:17.696 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:17.954 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:17.954 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:18.520 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:18.520 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:18.520 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:18.520 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:18.780 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:19.041 [2024-06-10 11:43:51.028165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:19.041 [2024-06-10 11:43:51.030451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:19.041 [2024-06-10 11:43:51.030643] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:19.041 [2024-06-10 11:43:51.030904] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:19.041 [2024-06-10 11:43:51.031096] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:19.041 [2024-06-10 11:43:51.031264] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:19.041 [2024-06-10 11:43:51.031415] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.041 [2024-06-10 11:43:51.031454] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:20:19.041 request: 00:20:19.041 { 00:20:19.041 "name": "raid_bdev1", 00:20:19.041 "raid_level": "raid0", 00:20:19.041 "base_bdevs": [ 00:20:19.041 "malloc1", 00:20:19.041 "malloc2", 00:20:19.041 "malloc3" 00:20:19.041 ], 00:20:19.041 "strip_size_kb": 64, 00:20:19.041 "superblock": false, 00:20:19.041 "method": "bdev_raid_create", 00:20:19.041 "req_id": 1 00:20:19.041 } 00:20:19.041 Got JSON-RPC error response 00:20:19.041 response: 00:20:19.041 { 00:20:19.041 "code": -17, 00:20:19.041 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:19.041 } 00:20:19.041 11:43:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@652 -- # es=1 00:20:19.041 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:19.041 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:19.041 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:19.041 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.041 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:19.618 [2024-06-10 11:43:51.560290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:19.618 [2024-06-10 11:43:51.560565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.618 [2024-06-10 11:43:51.560643] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:19.618 [2024-06-10 11:43:51.560745] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.618 [2024-06-10 11:43:51.563459] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.618 [2024-06-10 11:43:51.563634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:19.618 [2024-06-10 11:43:51.563854] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:19.618 [2024-06-10 11:43:51.564061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:19.618 pt1 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.618 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.886 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:20:19.886 "name": "raid_bdev1", 00:20:19.886 "uuid": "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7", 00:20:19.886 "strip_size_kb": 64, 00:20:19.886 "state": "configuring", 00:20:19.886 "raid_level": "raid0", 00:20:19.886 "superblock": true, 00:20:19.886 "num_base_bdevs": 3, 00:20:19.886 "num_base_bdevs_discovered": 1, 00:20:19.886 "num_base_bdevs_operational": 3, 00:20:19.886 "base_bdevs_list": [ 00:20:19.886 { 00:20:19.886 "name": "pt1", 00:20:19.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:19.886 "is_configured": true, 00:20:19.886 "data_offset": 2048, 00:20:19.886 "data_size": 63488 00:20:19.886 }, 00:20:19.886 { 00:20:19.886 "name": null, 00:20:19.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:19.886 "is_configured": false, 00:20:19.886 "data_offset": 2048, 00:20:19.886 "data_size": 63488 00:20:19.886 }, 00:20:19.886 { 00:20:19.886 "name": null, 00:20:19.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:19.886 "is_configured": false, 00:20:19.886 "data_offset": 2048, 00:20:19.886 "data_size": 63488 00:20:19.886 } 00:20:19.886 ] 00:20:19.886 }' 00:20:19.886 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.886 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.454 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:20:20.455 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:20.711 [2024-06-10 11:43:52.768571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:20.967 [2024-06-10 11:43:52.768865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.967 [2024-06-10 11:43:52.768957] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:20.967 [2024-06-10 11:43:52.769064] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.967 [2024-06-10 11:43:52.769607] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.967 [2024-06-10 11:43:52.769758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:20.967 [2024-06-10 11:43:52.770010] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:20.967 [2024-06-10 11:43:52.770139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:20.967 pt2 00:20:20.967 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:21.223 [2024-06-10 11:43:53.060661] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.223 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.224 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.224 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.224 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.480 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.480 "name": "raid_bdev1", 00:20:21.480 "uuid": "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7", 00:20:21.480 "strip_size_kb": 64, 00:20:21.480 "state": "configuring", 00:20:21.480 "raid_level": "raid0", 00:20:21.480 "superblock": true, 00:20:21.480 "num_base_bdevs": 3, 00:20:21.480 "num_base_bdevs_discovered": 1, 00:20:21.480 "num_base_bdevs_operational": 3, 00:20:21.480 "base_bdevs_list": [ 00:20:21.480 { 00:20:21.480 "name": "pt1", 00:20:21.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:21.480 "is_configured": true, 00:20:21.480 "data_offset": 2048, 00:20:21.480 "data_size": 63488 00:20:21.480 }, 00:20:21.480 { 00:20:21.480 "name": null, 00:20:21.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.480 "is_configured": false, 00:20:21.480 "data_offset": 2048, 00:20:21.480 "data_size": 63488 00:20:21.480 }, 00:20:21.480 { 00:20:21.480 "name": null, 00:20:21.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:21.480 "is_configured": false, 00:20:21.480 "data_offset": 2048, 00:20:21.480 "data_size": 63488 00:20:21.480 } 00:20:21.480 ] 00:20:21.480 }' 00:20:21.480 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.480 11:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.046 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:22.046 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:22.046 11:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:22.304 [2024-06-10 11:43:54.247170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:22.304 [2024-06-10 11:43:54.247405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.304 [2024-06-10 11:43:54.247474] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:22.304 [2024-06-10 11:43:54.247666] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.304 [2024-06-10 11:43:54.248232] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.304 [2024-06-10 11:43:54.248314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:22.304 [2024-06-10 11:43:54.248458] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:22.304 [2024-06-10 11:43:54.248503] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:20:22.304 pt2 00:20:22.304 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:22.304 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:22.304 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:22.562 [2024-06-10 11:43:54.527247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:22.562 [2024-06-10 11:43:54.527527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.562 [2024-06-10 11:43:54.527601] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:22.562 [2024-06-10 11:43:54.527703] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.562 [2024-06-10 11:43:54.528314] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.562 [2024-06-10 11:43:54.528386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:22.562 [2024-06-10 11:43:54.528620] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:22.562 [2024-06-10 11:43:54.528678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:22.562 [2024-06-10 11:43:54.528833] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:20:22.562 [2024-06-10 11:43:54.528880] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:22.562 [2024-06-10 11:43:54.529006] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:22.562 [2024-06-10 11:43:54.529555] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:20:22.562 [2024-06-10 11:43:54.529673] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:20:22.562 [2024-06-10 11:43:54.529910] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.562 pt3 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.562 
11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.562 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.821 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.821 "name": "raid_bdev1", 00:20:22.821 "uuid": "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7", 00:20:22.821 "strip_size_kb": 64, 00:20:22.821 "state": "online", 00:20:22.821 "raid_level": "raid0", 00:20:22.821 "superblock": true, 00:20:22.821 "num_base_bdevs": 3, 00:20:22.821 "num_base_bdevs_discovered": 3, 00:20:22.821 "num_base_bdevs_operational": 3, 00:20:22.821 "base_bdevs_list": [ 00:20:22.821 { 00:20:22.821 "name": "pt1", 00:20:22.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:22.821 "is_configured": true, 00:20:22.821 "data_offset": 2048, 00:20:22.821 "data_size": 63488 00:20:22.821 }, 00:20:22.821 { 00:20:22.821 "name": "pt2", 00:20:22.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.821 "is_configured": true, 00:20:22.821 "data_offset": 2048, 00:20:22.821 "data_size": 63488 00:20:22.821 }, 00:20:22.821 { 00:20:22.821 "name": "pt3", 00:20:22.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:22.821 "is_configured": true, 00:20:22.821 "data_offset": 2048, 00:20:22.821 "data_size": 63488 00:20:22.821 } 00:20:22.821 ] 00:20:22.821 }' 00:20:22.821 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.821 11:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.386 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:23.386 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:23.386 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:23.386 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:23.386 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:23.386 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:23.386 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:23.386 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:23.643 [2024-06-10 11:43:55.587762] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.643 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:23.643 "name": "raid_bdev1", 00:20:23.643 "aliases": [ 00:20:23.643 "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7" 00:20:23.643 ], 00:20:23.643 "product_name": "Raid Volume", 00:20:23.643 "block_size": 512, 00:20:23.643 "num_blocks": 190464, 00:20:23.643 "uuid": "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7", 00:20:23.643 "assigned_rate_limits": { 00:20:23.643 "rw_ios_per_sec": 0, 00:20:23.643 "rw_mbytes_per_sec": 0, 00:20:23.643 "r_mbytes_per_sec": 0, 00:20:23.643 "w_mbytes_per_sec": 0 00:20:23.643 }, 00:20:23.643 "claimed": false, 00:20:23.643 "zoned": false, 00:20:23.643 "supported_io_types": { 00:20:23.643 "read": true, 00:20:23.643 "write": true, 00:20:23.643 "unmap": true, 00:20:23.643 "write_zeroes": true, 00:20:23.643 "flush": 
true, 00:20:23.643 "reset": true, 00:20:23.643 "compare": false, 00:20:23.643 "compare_and_write": false, 00:20:23.643 "abort": false, 00:20:23.643 "nvme_admin": false, 00:20:23.643 "nvme_io": false 00:20:23.643 }, 00:20:23.643 "memory_domains": [ 00:20:23.643 { 00:20:23.643 "dma_device_id": "system", 00:20:23.643 "dma_device_type": 1 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.643 "dma_device_type": 2 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "dma_device_id": "system", 00:20:23.643 "dma_device_type": 1 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.643 "dma_device_type": 2 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "dma_device_id": "system", 00:20:23.643 "dma_device_type": 1 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.643 "dma_device_type": 2 00:20:23.643 } 00:20:23.643 ], 00:20:23.643 "driver_specific": { 00:20:23.643 "raid": { 00:20:23.643 "uuid": "03510e8e-6bcb-4b6c-baf4-e8de73e01ac7", 00:20:23.643 "strip_size_kb": 64, 00:20:23.643 "state": "online", 00:20:23.643 "raid_level": "raid0", 00:20:23.643 "superblock": true, 00:20:23.643 "num_base_bdevs": 3, 00:20:23.643 "num_base_bdevs_discovered": 3, 00:20:23.643 "num_base_bdevs_operational": 3, 00:20:23.643 "base_bdevs_list": [ 00:20:23.643 { 00:20:23.643 "name": "pt1", 00:20:23.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.643 "is_configured": true, 00:20:23.643 "data_offset": 2048, 00:20:23.643 "data_size": 63488 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "name": "pt2", 00:20:23.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.643 "is_configured": true, 00:20:23.643 "data_offset": 2048, 00:20:23.643 "data_size": 63488 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "name": "pt3", 00:20:23.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:23.643 "is_configured": true, 00:20:23.643 "data_offset": 2048, 00:20:23.643 "data_size": 63488 00:20:23.643 } 00:20:23.643 ] 00:20:23.643 } 00:20:23.643 } 00:20:23.643 }' 00:20:23.643 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:23.643 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:23.643 pt2 00:20:23.643 pt3' 00:20:23.643 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:23.643 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:23.643 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:23.901 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:23.901 "name": "pt1", 00:20:23.901 "aliases": [ 00:20:23.901 "00000000-0000-0000-0000-000000000001" 00:20:23.901 ], 00:20:23.901 "product_name": "passthru", 00:20:23.901 "block_size": 512, 00:20:23.901 "num_blocks": 65536, 00:20:23.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.901 "assigned_rate_limits": { 00:20:23.901 "rw_ios_per_sec": 0, 00:20:23.901 "rw_mbytes_per_sec": 0, 00:20:23.901 "r_mbytes_per_sec": 0, 00:20:23.901 "w_mbytes_per_sec": 0 00:20:23.901 }, 00:20:23.901 "claimed": true, 00:20:23.901 "claim_type": "exclusive_write", 00:20:23.901 "zoned": false, 00:20:23.901 "supported_io_types": { 00:20:23.901 "read": true, 00:20:23.901 "write": true, 
00:20:23.901 "unmap": true, 00:20:23.901 "write_zeroes": true, 00:20:23.901 "flush": true, 00:20:23.901 "reset": true, 00:20:23.901 "compare": false, 00:20:23.901 "compare_and_write": false, 00:20:23.901 "abort": true, 00:20:23.901 "nvme_admin": false, 00:20:23.901 "nvme_io": false 00:20:23.901 }, 00:20:23.901 "memory_domains": [ 00:20:23.901 { 00:20:23.901 "dma_device_id": "system", 00:20:23.901 "dma_device_type": 1 00:20:23.901 }, 00:20:23.901 { 00:20:23.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.901 "dma_device_type": 2 00:20:23.901 } 00:20:23.901 ], 00:20:23.901 "driver_specific": { 00:20:23.901 "passthru": { 00:20:23.901 "name": "pt1", 00:20:23.901 "base_bdev_name": "malloc1" 00:20:23.901 } 00:20:23.901 } 00:20:23.901 }' 00:20:23.901 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.158 11:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.158 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:24.158 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.158 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.158 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:24.158 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.158 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.158 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:24.158 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.417 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.417 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.417 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:24.417 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:24.417 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:24.675 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:24.675 "name": "pt2", 00:20:24.675 "aliases": [ 00:20:24.675 "00000000-0000-0000-0000-000000000002" 00:20:24.675 ], 00:20:24.675 "product_name": "passthru", 00:20:24.675 "block_size": 512, 00:20:24.675 "num_blocks": 65536, 00:20:24.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.675 "assigned_rate_limits": { 00:20:24.675 "rw_ios_per_sec": 0, 00:20:24.675 "rw_mbytes_per_sec": 0, 00:20:24.675 "r_mbytes_per_sec": 0, 00:20:24.675 "w_mbytes_per_sec": 0 00:20:24.675 }, 00:20:24.675 "claimed": true, 00:20:24.675 "claim_type": "exclusive_write", 00:20:24.675 "zoned": false, 00:20:24.675 "supported_io_types": { 00:20:24.675 "read": true, 00:20:24.675 "write": true, 00:20:24.675 "unmap": true, 00:20:24.675 "write_zeroes": true, 00:20:24.675 "flush": true, 00:20:24.675 "reset": true, 00:20:24.675 "compare": false, 00:20:24.675 "compare_and_write": false, 00:20:24.675 "abort": true, 00:20:24.675 "nvme_admin": false, 00:20:24.675 "nvme_io": false 00:20:24.675 }, 00:20:24.675 "memory_domains": [ 00:20:24.675 { 00:20:24.675 "dma_device_id": "system", 00:20:24.675 "dma_device_type": 1 00:20:24.675 }, 00:20:24.675 { 
00:20:24.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.675 "dma_device_type": 2 00:20:24.675 } 00:20:24.675 ], 00:20:24.675 "driver_specific": { 00:20:24.675 "passthru": { 00:20:24.675 "name": "pt2", 00:20:24.675 "base_bdev_name": "malloc2" 00:20:24.675 } 00:20:24.675 } 00:20:24.675 }' 00:20:24.675 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.675 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.675 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:24.675 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.675 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.675 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:24.675 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.933 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.933 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:24.933 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.933 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.933 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.933 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:24.933 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:24.933 11:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:25.191 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:25.191 "name": "pt3", 00:20:25.191 "aliases": [ 00:20:25.191 "00000000-0000-0000-0000-000000000003" 00:20:25.191 ], 00:20:25.191 "product_name": "passthru", 00:20:25.191 "block_size": 512, 00:20:25.191 "num_blocks": 65536, 00:20:25.191 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.191 "assigned_rate_limits": { 00:20:25.191 "rw_ios_per_sec": 0, 00:20:25.191 "rw_mbytes_per_sec": 0, 00:20:25.191 "r_mbytes_per_sec": 0, 00:20:25.191 "w_mbytes_per_sec": 0 00:20:25.191 }, 00:20:25.191 "claimed": true, 00:20:25.191 "claim_type": "exclusive_write", 00:20:25.191 "zoned": false, 00:20:25.191 "supported_io_types": { 00:20:25.191 "read": true, 00:20:25.191 "write": true, 00:20:25.191 "unmap": true, 00:20:25.191 "write_zeroes": true, 00:20:25.191 "flush": true, 00:20:25.191 "reset": true, 00:20:25.191 "compare": false, 00:20:25.191 "compare_and_write": false, 00:20:25.191 "abort": true, 00:20:25.191 "nvme_admin": false, 00:20:25.191 "nvme_io": false 00:20:25.191 }, 00:20:25.191 "memory_domains": [ 00:20:25.191 { 00:20:25.191 "dma_device_id": "system", 00:20:25.191 "dma_device_type": 1 00:20:25.191 }, 00:20:25.191 { 00:20:25.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.191 "dma_device_type": 2 00:20:25.191 } 00:20:25.191 ], 00:20:25.191 "driver_specific": { 00:20:25.191 "passthru": { 00:20:25.191 "name": "pt3", 00:20:25.191 "base_bdev_name": "malloc3" 00:20:25.191 } 00:20:25.191 } 00:20:25.191 }' 00:20:25.191 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:25.191 11:43:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:25.191 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:25.191 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:25.191 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:25.448 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:25.706 [2024-06-10 11:43:57.640225] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 03510e8e-6bcb-4b6c-baf4-e8de73e01ac7 '!=' 03510e8e-6bcb-4b6c-baf4-e8de73e01ac7 ']' 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 128513 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 128513 ']' 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 128513 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 128513 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 128513' 00:20:25.706 killing process with pid 128513 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 128513 00:20:25.706 [2024-06-10 11:43:57.690212] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:25.706 11:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 128513 00:20:25.706 [2024-06-10 11:43:57.690480] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.706 [2024-06-10 11:43:57.690618] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.706 
[2024-06-10 11:43:57.690711] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:20:26.270 [2024-06-10 11:43:58.036532] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.644 ************************************ 00:20:27.644 END TEST raid_superblock_test 00:20:27.644 ************************************ 00:20:27.644 11:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:27.644 00:20:27.644 real 0m16.307s 00:20:27.644 user 0m28.414s 00:20:27.644 sys 0m2.289s 00:20:27.644 11:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:27.644 11:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.644 11:43:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:20:27.644 11:43:59 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:20:27.644 11:43:59 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:27.644 11:43:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.644 ************************************ 00:20:27.644 START TEST raid_read_error_test 00:20:27.644 ************************************ 00:20:27.644 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 3 read 00:20:27.644 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.kFirDGtiWo 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=129087 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 129087 /var/tmp/spdk-raid.sock 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 129087 ']' 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:27.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:27.645 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.645 [2024-06-10 11:43:59.656921] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
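For reference, the passthru/superblock verification loop traced above (bdev_raid.sh@447 through @486) can be reproduced by hand with the same RPC calls. The following is a minimal sketch, not a transcript of the run: it assumes an SPDK target is already listening on /var/tmp/spdk-raid.sock and that the malloc1..malloc3 bdevs carrying the raid0 superblocks from the earlier steps still exist; the $rpc/$sock shorthands are introduced here for brevity.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # same script path as in the trace
  sock=/var/tmp/spdk-raid.sock                      # same RPC socket as in the trace
  # Re-create a passthru bdev over malloc1; the raid module's examine path then finds
  # the raid0 superblock on it and claims it, as logged above for pt1/pt2/pt3.
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # Read the assembled raid back and check the fields verify_raid_bdev_state asserts on.
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
  # Per-base-bdev property checks, mirroring bdev_raid.sh@204 through @208.
  $rpc -s $sock bdev_get_bdevs -b pt1 | jq '.[] | .block_size, .md_size, .dif_type'

The same pattern repeats for pt2 and pt3 in the trace; only the base bdev name and UUID change.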
00:20:27.645 [2024-06-10 11:43:59.657291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129087 ] 00:20:27.902 [2024-06-10 11:43:59.824769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.160 [2024-06-10 11:44:00.063327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.417 [2024-06-10 11:44:00.346180] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.675 11:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:28.675 11:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:20:28.675 11:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:28.675 11:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:28.933 BaseBdev1_malloc 00:20:28.933 11:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:29.191 true 00:20:29.191 11:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:29.450 [2024-06-10 11:44:01.350240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:29.450 [2024-06-10 11:44:01.350564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.450 [2024-06-10 11:44:01.350741] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:20:29.450 [2024-06-10 11:44:01.350866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.450 [2024-06-10 11:44:01.353479] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.450 [2024-06-10 11:44:01.353635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:29.450 BaseBdev1 00:20:29.450 11:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:29.450 11:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:29.708 BaseBdev2_malloc 00:20:29.708 11:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:29.966 true 00:20:29.966 11:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:30.224 [2024-06-10 11:44:02.185820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:30.224 [2024-06-10 11:44:02.186120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.224 [2024-06-10 11:44:02.186212] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:30.224 [2024-06-10 11:44:02.186416] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.224 [2024-06-10 11:44:02.188931] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.224 [2024-06-10 11:44:02.189087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:30.224 BaseBdev2 00:20:30.224 11:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:30.224 11:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:30.482 BaseBdev3_malloc 00:20:30.482 11:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:30.740 true 00:20:30.998 11:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:30.998 [2024-06-10 11:44:03.051855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:30.998 [2024-06-10 11:44:03.052171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.998 [2024-06-10 11:44:03.052243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:30.998 [2024-06-10 11:44:03.052347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.998 [2024-06-10 11:44:03.054758] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.998 [2024-06-10 11:44:03.054926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:31.257 BaseBdev3 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:31.257 [2024-06-10 11:44:03.243972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.257 [2024-06-10 11:44:03.246402] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:31.257 [2024-06-10 11:44:03.246648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:31.257 [2024-06-10 11:44:03.247036] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:20:31.257 [2024-06-10 11:44:03.247145] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:31.257 [2024-06-10 11:44:03.247333] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:31.257 [2024-06-10 11:44:03.247767] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:20:31.257 [2024-06-10 11:44:03.247888] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:20:31.257 [2024-06-10 11:44:03.248200] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.257 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.516 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.516 "name": "raid_bdev1", 00:20:31.516 "uuid": "3bcf2748-8a3f-45e0-b899-c613cd996e48", 00:20:31.516 "strip_size_kb": 64, 00:20:31.516 "state": "online", 00:20:31.516 "raid_level": "raid0", 00:20:31.516 "superblock": true, 00:20:31.516 "num_base_bdevs": 3, 00:20:31.516 "num_base_bdevs_discovered": 3, 00:20:31.516 "num_base_bdevs_operational": 3, 00:20:31.516 "base_bdevs_list": [ 00:20:31.516 { 00:20:31.516 "name": "BaseBdev1", 00:20:31.516 "uuid": "129bad65-ccea-5b86-a0d6-fbcb78adc13f", 00:20:31.516 "is_configured": true, 00:20:31.516 "data_offset": 2048, 00:20:31.516 "data_size": 63488 00:20:31.516 }, 00:20:31.516 { 00:20:31.516 "name": "BaseBdev2", 00:20:31.516 "uuid": "05556943-0196-592f-a2f3-f9cf7c9a092c", 00:20:31.516 "is_configured": true, 00:20:31.516 "data_offset": 2048, 00:20:31.516 "data_size": 63488 00:20:31.516 }, 00:20:31.516 { 00:20:31.516 "name": "BaseBdev3", 00:20:31.516 "uuid": "51afa51e-dc0b-505a-a18e-9e4c5c45d5d1", 00:20:31.516 "is_configured": true, 00:20:31.516 "data_offset": 2048, 00:20:31.516 "data_size": 63488 00:20:31.516 } 00:20:31.516 ] 00:20:31.516 }' 00:20:31.516 11:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.516 11:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.085 11:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:32.085 11:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:32.343 [2024-06-10 11:44:04.214692] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:33.276 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 3 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.535 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.792 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.792 "name": "raid_bdev1", 00:20:33.792 "uuid": "3bcf2748-8a3f-45e0-b899-c613cd996e48", 00:20:33.792 "strip_size_kb": 64, 00:20:33.792 "state": "online", 00:20:33.792 "raid_level": "raid0", 00:20:33.792 "superblock": true, 00:20:33.792 "num_base_bdevs": 3, 00:20:33.792 "num_base_bdevs_discovered": 3, 00:20:33.792 "num_base_bdevs_operational": 3, 00:20:33.792 "base_bdevs_list": [ 00:20:33.792 { 00:20:33.792 "name": "BaseBdev1", 00:20:33.792 "uuid": "129bad65-ccea-5b86-a0d6-fbcb78adc13f", 00:20:33.792 "is_configured": true, 00:20:33.792 "data_offset": 2048, 00:20:33.792 "data_size": 63488 00:20:33.792 }, 00:20:33.792 { 00:20:33.792 "name": "BaseBdev2", 00:20:33.792 "uuid": "05556943-0196-592f-a2f3-f9cf7c9a092c", 00:20:33.792 "is_configured": true, 00:20:33.792 "data_offset": 2048, 00:20:33.792 "data_size": 63488 00:20:33.792 }, 00:20:33.792 { 00:20:33.792 "name": "BaseBdev3", 00:20:33.792 "uuid": "51afa51e-dc0b-505a-a18e-9e4c5c45d5d1", 00:20:33.792 "is_configured": true, 00:20:33.792 "data_offset": 2048, 00:20:33.792 "data_size": 63488 00:20:33.792 } 00:20:33.792 ] 00:20:33.792 }' 00:20:33.792 11:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.792 11:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.357 11:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:34.614 [2024-06-10 11:44:06.496026] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.614 [2024-06-10 11:44:06.496281] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.614 [2024-06-10 11:44:06.499208] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.614 [2024-06-10 11:44:06.499376] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.614 [2024-06-10 11:44:06.499447] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.614 [2024-06-10 
11:44:06.499649] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:20:34.614 0 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 129087 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 129087 ']' 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 129087 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 129087 00:20:34.614 killing process with pid 129087 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 129087' 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 129087 00:20:34.614 [2024-06-10 11:44:06.539189] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:34.614 11:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 129087 00:20:34.871 [2024-06-10 11:44:06.815619] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.kFirDGtiWo 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:20:36.769 00:20:36.769 real 0m8.831s 00:20:36.769 user 0m13.072s 00:20:36.769 sys 0m1.152s 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:36.769 ************************************ 00:20:36.769 END TEST raid_read_error_test 00:20:36.769 ************************************ 00:20:36.769 11:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.769 11:44:08 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:20:36.769 11:44:08 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:20:36.769 11:44:08 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:36.769 11:44:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.769 ************************************ 00:20:36.769 START TEST raid_write_error_test 00:20:36.769 ************************************ 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 3 write 00:20:36.769 11:44:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.5BXhAKYJ48 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=129302 00:20:36.769 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 129302 /var/tmp/spdk-raid.sock 00:20:36.770 11:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 129302 ']' 00:20:36.770 11:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:36.770 11:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:36.770 11:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:36.770 11:44:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:36.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:36.770 11:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:36.770 11:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.770 [2024-06-10 11:44:08.573972] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:20:36.770 [2024-06-10 11:44:08.574413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129302 ] 00:20:36.770 [2024-06-10 11:44:08.755721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.027 [2024-06-10 11:44:08.985469] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.285 [2024-06-10 11:44:09.222140] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:37.542 11:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:37.542 11:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:20:37.542 11:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:37.542 11:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:38.106 BaseBdev1_malloc 00:20:38.106 11:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:38.364 true 00:20:38.364 11:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:38.622 [2024-06-10 11:44:10.461425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:38.622 [2024-06-10 11:44:10.461708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.622 [2024-06-10 11:44:10.461851] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:20:38.622 [2024-06-10 11:44:10.461942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.622 [2024-06-10 11:44:10.464574] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.622 [2024-06-10 11:44:10.464743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:38.622 BaseBdev1 00:20:38.622 11:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:38.622 11:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:38.879 BaseBdev2_malloc 00:20:38.879 11:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:39.137 true 00:20:39.137 11:44:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:39.137 [2024-06-10 11:44:11.176252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:39.137 [2024-06-10 11:44:11.176580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.137 [2024-06-10 11:44:11.176667] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:39.137 [2024-06-10 11:44:11.176790] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.137 [2024-06-10 11:44:11.179077] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.137 [2024-06-10 11:44:11.179225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:39.137 BaseBdev2 00:20:39.137 11:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:39.137 11:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:39.711 BaseBdev3_malloc 00:20:39.711 11:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:39.977 true 00:20:39.977 11:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:39.977 [2024-06-10 11:44:11.988308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:39.977 [2024-06-10 11:44:11.988620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.977 [2024-06-10 11:44:11.988839] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:39.977 [2024-06-10 11:44:11.988960] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.977 [2024-06-10 11:44:11.991533] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.977 [2024-06-10 11:44:11.991693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:39.977 BaseBdev3 00:20:39.977 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:40.245 [2024-06-10 11:44:12.260513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.245 [2024-06-10 11:44:12.262827] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.245 [2024-06-10 11:44:12.263032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:40.245 [2024-06-10 11:44:12.263343] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:20:40.245 [2024-06-10 11:44:12.263459] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:40.245 [2024-06-10 11:44:12.263628] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:40.245 [2024-06-10 11:44:12.264053] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:20:40.245 [2024-06-10 11:44:12.264171] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:20:40.245 [2024-06-10 11:44:12.264470] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.245 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.516 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.516 "name": "raid_bdev1", 00:20:40.516 "uuid": "8d4e3099-b3ec-45fd-8908-e083ca014261", 00:20:40.516 "strip_size_kb": 64, 00:20:40.516 "state": "online", 00:20:40.516 "raid_level": "raid0", 00:20:40.516 "superblock": true, 00:20:40.516 "num_base_bdevs": 3, 00:20:40.516 "num_base_bdevs_discovered": 3, 00:20:40.516 "num_base_bdevs_operational": 3, 00:20:40.516 "base_bdevs_list": [ 00:20:40.516 { 00:20:40.516 "name": "BaseBdev1", 00:20:40.516 "uuid": "335b8708-96b5-5c1c-94ed-d8fd387464c9", 00:20:40.516 "is_configured": true, 00:20:40.516 "data_offset": 2048, 00:20:40.516 "data_size": 63488 00:20:40.516 }, 00:20:40.516 { 00:20:40.516 "name": "BaseBdev2", 00:20:40.516 "uuid": "9747901b-533f-5313-8a97-9fde0eac267b", 00:20:40.516 "is_configured": true, 00:20:40.516 "data_offset": 2048, 00:20:40.516 "data_size": 63488 00:20:40.516 }, 00:20:40.516 { 00:20:40.516 "name": "BaseBdev3", 00:20:40.516 "uuid": "b537c290-1fab-5abf-a3b0-a94f3d3da4f1", 00:20:40.516 "is_configured": true, 00:20:40.516 "data_offset": 2048, 00:20:40.516 "data_size": 63488 00:20:40.516 } 00:20:40.516 ] 00:20:40.516 }' 00:20:40.516 11:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.516 11:44:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.109 11:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:41.109 11:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:41.109 [2024-06-10 11:44:13.154107] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:20:42.047 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.613 "name": "raid_bdev1", 00:20:42.613 "uuid": "8d4e3099-b3ec-45fd-8908-e083ca014261", 00:20:42.613 "strip_size_kb": 64, 00:20:42.613 "state": "online", 00:20:42.613 "raid_level": "raid0", 00:20:42.613 "superblock": true, 00:20:42.613 "num_base_bdevs": 3, 00:20:42.613 "num_base_bdevs_discovered": 3, 00:20:42.613 "num_base_bdevs_operational": 3, 00:20:42.613 "base_bdevs_list": [ 00:20:42.613 { 00:20:42.613 "name": "BaseBdev1", 00:20:42.613 "uuid": "335b8708-96b5-5c1c-94ed-d8fd387464c9", 00:20:42.613 "is_configured": true, 00:20:42.613 "data_offset": 2048, 00:20:42.613 "data_size": 63488 00:20:42.613 }, 00:20:42.613 { 00:20:42.613 "name": "BaseBdev2", 00:20:42.613 "uuid": "9747901b-533f-5313-8a97-9fde0eac267b", 00:20:42.613 "is_configured": true, 00:20:42.613 "data_offset": 2048, 00:20:42.613 "data_size": 63488 00:20:42.613 }, 00:20:42.613 { 00:20:42.613 "name": "BaseBdev3", 00:20:42.613 "uuid": "b537c290-1fab-5abf-a3b0-a94f3d3da4f1", 00:20:42.613 "is_configured": true, 00:20:42.613 "data_offset": 2048, 00:20:42.613 "data_size": 63488 00:20:42.613 } 00:20:42.613 ] 00:20:42.613 }' 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.613 11:44:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:43.578 [2024-06-10 11:44:15.512143] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.578 [2024-06-10 11:44:15.512401] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.578 [2024-06-10 11:44:15.515256] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.578 [2024-06-10 11:44:15.515405] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.578 [2024-06-10 11:44:15.515472] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.578 [2024-06-10 11:44:15.515580] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:20:43.578 0 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 129302 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 129302 ']' 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 129302 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 129302 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 129302' 00:20:43.578 killing process with pid 129302 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 129302 00:20:43.578 [2024-06-10 11:44:15.563318] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.578 11:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 129302 00:20:43.848 [2024-06-10 11:44:15.826059] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.5BXhAKYJ48 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:20:45.750 00:20:45.750 real 0m8.923s 00:20:45.750 user 0m13.385s 00:20:45.750 sys 0m1.043s 00:20:45.750 11:44:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:45.750 ************************************ 00:20:45.750 END TEST raid_write_error_test 00:20:45.750 ************************************ 00:20:45.750 11:44:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.750 11:44:17 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:45.750 11:44:17 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:20:45.750 11:44:17 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:20:45.750 11:44:17 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:45.750 11:44:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.750 ************************************ 00:20:45.750 START TEST raid_state_function_test 00:20:45.750 ************************************ 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 3 false 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true 
']' 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=129513 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 129513' 00:20:45.750 Process raid pid: 129513 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 129513 /var/tmp/spdk-raid.sock 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 129513 ']' 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:45.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:45.750 11:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.750 [2024-06-10 11:44:17.554150] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:20:45.750 [2024-06-10 11:44:17.554517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.750 [2024-06-10 11:44:17.736398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.008 [2024-06-10 11:44:17.938278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.265 [2024-06-10 11:44:18.156438] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.524 11:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:46.524 11:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:20:46.524 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:46.783 [2024-06-10 11:44:18.730767] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:46.783 [2024-06-10 11:44:18.730985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:46.783 [2024-06-10 11:44:18.731075] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:46.783 [2024-06-10 11:44:18.731133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:46.783 [2024-06-10 11:44:18.731161] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:46.783 [2024-06-10 11:44:18.731249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:46.783 11:44:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.783 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.042 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.042 "name": "Existed_Raid", 00:20:47.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.042 "strip_size_kb": 64, 00:20:47.042 "state": "configuring", 00:20:47.042 "raid_level": "concat", 00:20:47.042 "superblock": false, 00:20:47.042 "num_base_bdevs": 3, 00:20:47.042 "num_base_bdevs_discovered": 0, 00:20:47.042 "num_base_bdevs_operational": 3, 00:20:47.042 "base_bdevs_list": [ 00:20:47.042 { 00:20:47.042 "name": "BaseBdev1", 00:20:47.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.042 "is_configured": false, 00:20:47.042 "data_offset": 0, 00:20:47.042 "data_size": 0 00:20:47.042 }, 00:20:47.042 { 00:20:47.042 "name": "BaseBdev2", 00:20:47.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.042 "is_configured": false, 00:20:47.042 "data_offset": 0, 00:20:47.042 "data_size": 0 00:20:47.042 }, 00:20:47.042 { 00:20:47.042 "name": "BaseBdev3", 00:20:47.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.042 "is_configured": false, 00:20:47.042 "data_offset": 0, 00:20:47.042 "data_size": 0 00:20:47.042 } 00:20:47.042 ] 00:20:47.042 }' 00:20:47.042 11:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.042 11:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.607 11:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:47.866 [2024-06-10 11:44:19.778975] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:47.866 [2024-06-10 11:44:19.779240] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:47.866 11:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
Existed_Raid 00:20:48.124 [2024-06-10 11:44:19.999024] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:48.124 [2024-06-10 11:44:19.999253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:48.124 [2024-06-10 11:44:19.999341] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:48.124 [2024-06-10 11:44:19.999395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:48.124 [2024-06-10 11:44:19.999474] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:48.124 [2024-06-10 11:44:19.999528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:48.125 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:48.384 [2024-06-10 11:44:20.231932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.384 BaseBdev1 00:20:48.384 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:48.384 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:20:48.384 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:48.384 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:48.384 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:48.384 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:48.384 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:48.643 [ 00:20:48.643 { 00:20:48.643 "name": "BaseBdev1", 00:20:48.643 "aliases": [ 00:20:48.643 "26863470-9623-4955-a7c5-2e1c9a502081" 00:20:48.643 ], 00:20:48.643 "product_name": "Malloc disk", 00:20:48.643 "block_size": 512, 00:20:48.643 "num_blocks": 65536, 00:20:48.643 "uuid": "26863470-9623-4955-a7c5-2e1c9a502081", 00:20:48.643 "assigned_rate_limits": { 00:20:48.643 "rw_ios_per_sec": 0, 00:20:48.643 "rw_mbytes_per_sec": 0, 00:20:48.643 "r_mbytes_per_sec": 0, 00:20:48.643 "w_mbytes_per_sec": 0 00:20:48.643 }, 00:20:48.643 "claimed": true, 00:20:48.643 "claim_type": "exclusive_write", 00:20:48.643 "zoned": false, 00:20:48.643 "supported_io_types": { 00:20:48.643 "read": true, 00:20:48.643 "write": true, 00:20:48.643 "unmap": true, 00:20:48.643 "write_zeroes": true, 00:20:48.643 "flush": true, 00:20:48.643 "reset": true, 00:20:48.643 "compare": false, 00:20:48.643 "compare_and_write": false, 00:20:48.643 "abort": true, 00:20:48.643 "nvme_admin": false, 00:20:48.643 "nvme_io": false 00:20:48.643 }, 00:20:48.643 "memory_domains": [ 00:20:48.643 { 00:20:48.643 "dma_device_id": "system", 00:20:48.643 "dma_device_type": 1 00:20:48.643 }, 00:20:48.643 { 00:20:48.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.643 "dma_device_type": 2 00:20:48.643 } 00:20:48.643 ], 00:20:48.643 "driver_specific": {} 00:20:48.643 } 
00:20:48.643 ] 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.643 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.902 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.902 "name": "Existed_Raid", 00:20:48.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.902 "strip_size_kb": 64, 00:20:48.902 "state": "configuring", 00:20:48.902 "raid_level": "concat", 00:20:48.902 "superblock": false, 00:20:48.902 "num_base_bdevs": 3, 00:20:48.902 "num_base_bdevs_discovered": 1, 00:20:48.902 "num_base_bdevs_operational": 3, 00:20:48.902 "base_bdevs_list": [ 00:20:48.902 { 00:20:48.902 "name": "BaseBdev1", 00:20:48.902 "uuid": "26863470-9623-4955-a7c5-2e1c9a502081", 00:20:48.902 "is_configured": true, 00:20:48.902 "data_offset": 0, 00:20:48.902 "data_size": 65536 00:20:48.902 }, 00:20:48.902 { 00:20:48.902 "name": "BaseBdev2", 00:20:48.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.902 "is_configured": false, 00:20:48.902 "data_offset": 0, 00:20:48.902 "data_size": 0 00:20:48.902 }, 00:20:48.902 { 00:20:48.902 "name": "BaseBdev3", 00:20:48.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.902 "is_configured": false, 00:20:48.902 "data_offset": 0, 00:20:48.902 "data_size": 0 00:20:48.902 } 00:20:48.902 ] 00:20:48.902 }' 00:20:48.902 11:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.902 11:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:49.725 [2024-06-10 11:44:21.684297] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:49.725 [2024-06-10 11:44:21.684536] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:49.725 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:49.984 [2024-06-10 11:44:21.936382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:49.984 [2024-06-10 11:44:21.938550] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:49.984 [2024-06-10 11:44:21.938787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:49.984 [2024-06-10 11:44:21.938893] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:49.984 [2024-06-10 11:44:21.939035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.984 11:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.243 11:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.243 "name": "Existed_Raid", 00:20:50.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.243 "strip_size_kb": 64, 00:20:50.243 "state": "configuring", 00:20:50.243 "raid_level": "concat", 00:20:50.243 "superblock": false, 00:20:50.243 "num_base_bdevs": 3, 00:20:50.243 "num_base_bdevs_discovered": 1, 00:20:50.243 "num_base_bdevs_operational": 3, 00:20:50.243 "base_bdevs_list": [ 00:20:50.243 { 00:20:50.243 "name": "BaseBdev1", 00:20:50.243 "uuid": "26863470-9623-4955-a7c5-2e1c9a502081", 00:20:50.243 "is_configured": true, 00:20:50.243 "data_offset": 0, 00:20:50.243 "data_size": 65536 00:20:50.243 }, 00:20:50.243 { 00:20:50.243 "name": "BaseBdev2", 00:20:50.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.243 "is_configured": false, 00:20:50.243 "data_offset": 0, 00:20:50.243 "data_size": 0 00:20:50.243 }, 00:20:50.243 { 00:20:50.243 "name": "BaseBdev3", 00:20:50.243 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:50.243 "is_configured": false, 00:20:50.243 "data_offset": 0, 00:20:50.243 "data_size": 0 00:20:50.243 } 00:20:50.243 ] 00:20:50.243 }' 00:20:50.243 11:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.243 11:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.810 11:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:51.066 [2024-06-10 11:44:23.054933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:51.067 BaseBdev2 00:20:51.067 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:51.067 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:20:51.067 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:51.067 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:51.067 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:51.067 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:51.067 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:51.323 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:51.581 [ 00:20:51.581 { 00:20:51.581 "name": "BaseBdev2", 00:20:51.581 "aliases": [ 00:20:51.581 "f4d44da1-cb47-47f1-9823-7ee62bd55a6b" 00:20:51.581 ], 00:20:51.581 "product_name": "Malloc disk", 00:20:51.581 "block_size": 512, 00:20:51.581 "num_blocks": 65536, 00:20:51.581 "uuid": "f4d44da1-cb47-47f1-9823-7ee62bd55a6b", 00:20:51.581 "assigned_rate_limits": { 00:20:51.581 "rw_ios_per_sec": 0, 00:20:51.581 "rw_mbytes_per_sec": 0, 00:20:51.581 "r_mbytes_per_sec": 0, 00:20:51.581 "w_mbytes_per_sec": 0 00:20:51.581 }, 00:20:51.581 "claimed": true, 00:20:51.581 "claim_type": "exclusive_write", 00:20:51.581 "zoned": false, 00:20:51.581 "supported_io_types": { 00:20:51.581 "read": true, 00:20:51.581 "write": true, 00:20:51.581 "unmap": true, 00:20:51.581 "write_zeroes": true, 00:20:51.581 "flush": true, 00:20:51.581 "reset": true, 00:20:51.581 "compare": false, 00:20:51.581 "compare_and_write": false, 00:20:51.581 "abort": true, 00:20:51.581 "nvme_admin": false, 00:20:51.581 "nvme_io": false 00:20:51.581 }, 00:20:51.581 "memory_domains": [ 00:20:51.581 { 00:20:51.581 "dma_device_id": "system", 00:20:51.581 "dma_device_type": 1 00:20:51.581 }, 00:20:51.581 { 00:20:51.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.581 "dma_device_type": 2 00:20:51.581 } 00:20:51.581 ], 00:20:51.581 "driver_specific": {} 00:20:51.581 } 00:20:51.581 ] 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.581 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.838 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.838 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.104 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.104 "name": "Existed_Raid", 00:20:52.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.104 "strip_size_kb": 64, 00:20:52.104 "state": "configuring", 00:20:52.104 "raid_level": "concat", 00:20:52.104 "superblock": false, 00:20:52.104 "num_base_bdevs": 3, 00:20:52.104 "num_base_bdevs_discovered": 2, 00:20:52.104 "num_base_bdevs_operational": 3, 00:20:52.104 "base_bdevs_list": [ 00:20:52.104 { 00:20:52.104 "name": "BaseBdev1", 00:20:52.104 "uuid": "26863470-9623-4955-a7c5-2e1c9a502081", 00:20:52.104 "is_configured": true, 00:20:52.104 "data_offset": 0, 00:20:52.104 "data_size": 65536 00:20:52.104 }, 00:20:52.104 { 00:20:52.104 "name": "BaseBdev2", 00:20:52.104 "uuid": "f4d44da1-cb47-47f1-9823-7ee62bd55a6b", 00:20:52.104 "is_configured": true, 00:20:52.104 "data_offset": 0, 00:20:52.104 "data_size": 65536 00:20:52.104 }, 00:20:52.104 { 00:20:52.104 "name": "BaseBdev3", 00:20:52.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.104 "is_configured": false, 00:20:52.104 "data_offset": 0, 00:20:52.104 "data_size": 0 00:20:52.104 } 00:20:52.104 ] 00:20:52.104 }' 00:20:52.104 11:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.104 11:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.669 11:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:52.927 [2024-06-10 11:44:24.787749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:52.927 [2024-06-10 11:44:24.787964] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:52.927 [2024-06-10 11:44:24.788017] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:52.927 [2024-06-10 11:44:24.788278] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:52.927 [2024-06-10 11:44:24.788731] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x616000007280 00:20:52.927 [2024-06-10 11:44:24.788858] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:20:52.927 [2024-06-10 11:44:24.789217] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.927 BaseBdev3 00:20:52.927 11:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:52.927 11:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:20:52.927 11:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:52.927 11:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:52.927 11:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:52.927 11:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:52.927 11:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:53.188 11:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:53.451 [ 00:20:53.451 { 00:20:53.451 "name": "BaseBdev3", 00:20:53.451 "aliases": [ 00:20:53.451 "a0062bca-4e2e-4930-a01d-6071aa00a6f4" 00:20:53.451 ], 00:20:53.451 "product_name": "Malloc disk", 00:20:53.451 "block_size": 512, 00:20:53.451 "num_blocks": 65536, 00:20:53.451 "uuid": "a0062bca-4e2e-4930-a01d-6071aa00a6f4", 00:20:53.451 "assigned_rate_limits": { 00:20:53.451 "rw_ios_per_sec": 0, 00:20:53.451 "rw_mbytes_per_sec": 0, 00:20:53.451 "r_mbytes_per_sec": 0, 00:20:53.451 "w_mbytes_per_sec": 0 00:20:53.451 }, 00:20:53.451 "claimed": true, 00:20:53.451 "claim_type": "exclusive_write", 00:20:53.451 "zoned": false, 00:20:53.451 "supported_io_types": { 00:20:53.451 "read": true, 00:20:53.451 "write": true, 00:20:53.451 "unmap": true, 00:20:53.451 "write_zeroes": true, 00:20:53.451 "flush": true, 00:20:53.451 "reset": true, 00:20:53.451 "compare": false, 00:20:53.451 "compare_and_write": false, 00:20:53.451 "abort": true, 00:20:53.451 "nvme_admin": false, 00:20:53.451 "nvme_io": false 00:20:53.451 }, 00:20:53.451 "memory_domains": [ 00:20:53.451 { 00:20:53.451 "dma_device_id": "system", 00:20:53.451 "dma_device_type": 1 00:20:53.451 }, 00:20:53.451 { 00:20:53.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.451 "dma_device_type": 2 00:20:53.451 } 00:20:53.451 ], 00:20:53.451 "driver_specific": {} 00:20:53.451 } 00:20:53.451 ] 00:20:53.451 11:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:53.451 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:53.451 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:53.451 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:53.451 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:53.451 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:53.451 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:20:53.451 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:53.452 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:53.452 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.452 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.452 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.452 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.452 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.452 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.718 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.718 "name": "Existed_Raid", 00:20:53.718 "uuid": "103f906f-5b80-4160-b9f4-3ec0bfe77d3c", 00:20:53.718 "strip_size_kb": 64, 00:20:53.718 "state": "online", 00:20:53.718 "raid_level": "concat", 00:20:53.718 "superblock": false, 00:20:53.718 "num_base_bdevs": 3, 00:20:53.718 "num_base_bdevs_discovered": 3, 00:20:53.718 "num_base_bdevs_operational": 3, 00:20:53.718 "base_bdevs_list": [ 00:20:53.718 { 00:20:53.718 "name": "BaseBdev1", 00:20:53.718 "uuid": "26863470-9623-4955-a7c5-2e1c9a502081", 00:20:53.718 "is_configured": true, 00:20:53.718 "data_offset": 0, 00:20:53.718 "data_size": 65536 00:20:53.718 }, 00:20:53.718 { 00:20:53.718 "name": "BaseBdev2", 00:20:53.718 "uuid": "f4d44da1-cb47-47f1-9823-7ee62bd55a6b", 00:20:53.718 "is_configured": true, 00:20:53.718 "data_offset": 0, 00:20:53.718 "data_size": 65536 00:20:53.718 }, 00:20:53.718 { 00:20:53.718 "name": "BaseBdev3", 00:20:53.718 "uuid": "a0062bca-4e2e-4930-a01d-6071aa00a6f4", 00:20:53.718 "is_configured": true, 00:20:53.718 "data_offset": 0, 00:20:53.718 "data_size": 65536 00:20:53.718 } 00:20:53.718 ] 00:20:53.718 }' 00:20:53.718 11:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.718 11:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.297 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:54.297 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:54.297 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:54.297 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:54.297 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:54.297 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:54.297 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:54.297 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:54.555 [2024-06-10 11:44:26.472425] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.555 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:20:54.555 "name": "Existed_Raid", 00:20:54.555 "aliases": [ 00:20:54.555 "103f906f-5b80-4160-b9f4-3ec0bfe77d3c" 00:20:54.555 ], 00:20:54.555 "product_name": "Raid Volume", 00:20:54.555 "block_size": 512, 00:20:54.555 "num_blocks": 196608, 00:20:54.555 "uuid": "103f906f-5b80-4160-b9f4-3ec0bfe77d3c", 00:20:54.555 "assigned_rate_limits": { 00:20:54.555 "rw_ios_per_sec": 0, 00:20:54.555 "rw_mbytes_per_sec": 0, 00:20:54.555 "r_mbytes_per_sec": 0, 00:20:54.555 "w_mbytes_per_sec": 0 00:20:54.555 }, 00:20:54.555 "claimed": false, 00:20:54.555 "zoned": false, 00:20:54.555 "supported_io_types": { 00:20:54.555 "read": true, 00:20:54.555 "write": true, 00:20:54.555 "unmap": true, 00:20:54.555 "write_zeroes": true, 00:20:54.555 "flush": true, 00:20:54.555 "reset": true, 00:20:54.555 "compare": false, 00:20:54.555 "compare_and_write": false, 00:20:54.555 "abort": false, 00:20:54.555 "nvme_admin": false, 00:20:54.555 "nvme_io": false 00:20:54.555 }, 00:20:54.555 "memory_domains": [ 00:20:54.555 { 00:20:54.555 "dma_device_id": "system", 00:20:54.555 "dma_device_type": 1 00:20:54.555 }, 00:20:54.555 { 00:20:54.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.555 "dma_device_type": 2 00:20:54.555 }, 00:20:54.555 { 00:20:54.555 "dma_device_id": "system", 00:20:54.555 "dma_device_type": 1 00:20:54.555 }, 00:20:54.555 { 00:20:54.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.555 "dma_device_type": 2 00:20:54.555 }, 00:20:54.555 { 00:20:54.555 "dma_device_id": "system", 00:20:54.555 "dma_device_type": 1 00:20:54.555 }, 00:20:54.555 { 00:20:54.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.555 "dma_device_type": 2 00:20:54.555 } 00:20:54.555 ], 00:20:54.555 "driver_specific": { 00:20:54.555 "raid": { 00:20:54.555 "uuid": "103f906f-5b80-4160-b9f4-3ec0bfe77d3c", 00:20:54.555 "strip_size_kb": 64, 00:20:54.555 "state": "online", 00:20:54.555 "raid_level": "concat", 00:20:54.555 "superblock": false, 00:20:54.555 "num_base_bdevs": 3, 00:20:54.555 "num_base_bdevs_discovered": 3, 00:20:54.555 "num_base_bdevs_operational": 3, 00:20:54.555 "base_bdevs_list": [ 00:20:54.555 { 00:20:54.555 "name": "BaseBdev1", 00:20:54.555 "uuid": "26863470-9623-4955-a7c5-2e1c9a502081", 00:20:54.555 "is_configured": true, 00:20:54.555 "data_offset": 0, 00:20:54.555 "data_size": 65536 00:20:54.555 }, 00:20:54.555 { 00:20:54.555 "name": "BaseBdev2", 00:20:54.555 "uuid": "f4d44da1-cb47-47f1-9823-7ee62bd55a6b", 00:20:54.555 "is_configured": true, 00:20:54.555 "data_offset": 0, 00:20:54.555 "data_size": 65536 00:20:54.555 }, 00:20:54.555 { 00:20:54.555 "name": "BaseBdev3", 00:20:54.555 "uuid": "a0062bca-4e2e-4930-a01d-6071aa00a6f4", 00:20:54.555 "is_configured": true, 00:20:54.555 "data_offset": 0, 00:20:54.555 "data_size": 65536 00:20:54.555 } 00:20:54.555 ] 00:20:54.555 } 00:20:54.555 } 00:20:54.555 }' 00:20:54.555 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:54.555 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:54.555 BaseBdev2 00:20:54.555 BaseBdev3' 00:20:54.555 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:54.555 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:54.555 11:44:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:54.813 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:54.813 "name": "BaseBdev1", 00:20:54.813 "aliases": [ 00:20:54.813 "26863470-9623-4955-a7c5-2e1c9a502081" 00:20:54.813 ], 00:20:54.813 "product_name": "Malloc disk", 00:20:54.813 "block_size": 512, 00:20:54.813 "num_blocks": 65536, 00:20:54.813 "uuid": "26863470-9623-4955-a7c5-2e1c9a502081", 00:20:54.813 "assigned_rate_limits": { 00:20:54.813 "rw_ios_per_sec": 0, 00:20:54.813 "rw_mbytes_per_sec": 0, 00:20:54.813 "r_mbytes_per_sec": 0, 00:20:54.813 "w_mbytes_per_sec": 0 00:20:54.813 }, 00:20:54.813 "claimed": true, 00:20:54.813 "claim_type": "exclusive_write", 00:20:54.813 "zoned": false, 00:20:54.813 "supported_io_types": { 00:20:54.813 "read": true, 00:20:54.813 "write": true, 00:20:54.813 "unmap": true, 00:20:54.813 "write_zeroes": true, 00:20:54.813 "flush": true, 00:20:54.813 "reset": true, 00:20:54.813 "compare": false, 00:20:54.813 "compare_and_write": false, 00:20:54.813 "abort": true, 00:20:54.813 "nvme_admin": false, 00:20:54.813 "nvme_io": false 00:20:54.813 }, 00:20:54.813 "memory_domains": [ 00:20:54.813 { 00:20:54.813 "dma_device_id": "system", 00:20:54.813 "dma_device_type": 1 00:20:54.813 }, 00:20:54.813 { 00:20:54.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.813 "dma_device_type": 2 00:20:54.813 } 00:20:54.813 ], 00:20:54.813 "driver_specific": {} 00:20:54.813 }' 00:20:54.813 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.072 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.072 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.072 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.072 11:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.072 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.072 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.072 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.331 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:55.331 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.331 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.331 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:55.331 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.331 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:55.331 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.590 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.590 "name": "BaseBdev2", 00:20:55.590 "aliases": [ 00:20:55.590 "f4d44da1-cb47-47f1-9823-7ee62bd55a6b" 00:20:55.590 ], 00:20:55.590 "product_name": "Malloc disk", 00:20:55.590 "block_size": 512, 00:20:55.590 "num_blocks": 65536, 00:20:55.590 "uuid": "f4d44da1-cb47-47f1-9823-7ee62bd55a6b", 00:20:55.590 "assigned_rate_limits": { 00:20:55.590 
"rw_ios_per_sec": 0, 00:20:55.590 "rw_mbytes_per_sec": 0, 00:20:55.590 "r_mbytes_per_sec": 0, 00:20:55.590 "w_mbytes_per_sec": 0 00:20:55.590 }, 00:20:55.590 "claimed": true, 00:20:55.590 "claim_type": "exclusive_write", 00:20:55.590 "zoned": false, 00:20:55.590 "supported_io_types": { 00:20:55.590 "read": true, 00:20:55.590 "write": true, 00:20:55.590 "unmap": true, 00:20:55.590 "write_zeroes": true, 00:20:55.590 "flush": true, 00:20:55.590 "reset": true, 00:20:55.590 "compare": false, 00:20:55.590 "compare_and_write": false, 00:20:55.590 "abort": true, 00:20:55.590 "nvme_admin": false, 00:20:55.590 "nvme_io": false 00:20:55.590 }, 00:20:55.590 "memory_domains": [ 00:20:55.590 { 00:20:55.590 "dma_device_id": "system", 00:20:55.590 "dma_device_type": 1 00:20:55.590 }, 00:20:55.590 { 00:20:55.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.590 "dma_device_type": 2 00:20:55.590 } 00:20:55.590 ], 00:20:55.590 "driver_specific": {} 00:20:55.590 }' 00:20:55.590 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.590 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.590 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.590 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:55.849 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.106 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:56.106 11:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.363 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.363 "name": "BaseBdev3", 00:20:56.363 "aliases": [ 00:20:56.363 "a0062bca-4e2e-4930-a01d-6071aa00a6f4" 00:20:56.363 ], 00:20:56.363 "product_name": "Malloc disk", 00:20:56.363 "block_size": 512, 00:20:56.363 "num_blocks": 65536, 00:20:56.363 "uuid": "a0062bca-4e2e-4930-a01d-6071aa00a6f4", 00:20:56.363 "assigned_rate_limits": { 00:20:56.363 "rw_ios_per_sec": 0, 00:20:56.363 "rw_mbytes_per_sec": 0, 00:20:56.363 "r_mbytes_per_sec": 0, 00:20:56.363 "w_mbytes_per_sec": 0 00:20:56.363 }, 00:20:56.363 "claimed": true, 00:20:56.363 "claim_type": "exclusive_write", 00:20:56.363 "zoned": false, 00:20:56.363 "supported_io_types": { 00:20:56.363 "read": true, 00:20:56.363 "write": true, 00:20:56.363 "unmap": true, 00:20:56.363 "write_zeroes": true, 00:20:56.363 "flush": true, 00:20:56.363 "reset": true, 00:20:56.363 "compare": false, 
00:20:56.363 "compare_and_write": false, 00:20:56.363 "abort": true, 00:20:56.363 "nvme_admin": false, 00:20:56.363 "nvme_io": false 00:20:56.363 }, 00:20:56.363 "memory_domains": [ 00:20:56.363 { 00:20:56.363 "dma_device_id": "system", 00:20:56.363 "dma_device_type": 1 00:20:56.363 }, 00:20:56.363 { 00:20:56.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.363 "dma_device_type": 2 00:20:56.363 } 00:20:56.363 ], 00:20:56.363 "driver_specific": {} 00:20:56.363 }' 00:20:56.363 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.363 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.363 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.363 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.363 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.363 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.363 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.620 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.620 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.620 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.620 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.620 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.621 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:56.879 [2024-06-10 11:44:28.860720] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.879 [2024-06-10 11:44:28.860925] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.879 [2024-06-10 11:44:28.861114] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.138 11:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.395 11:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.395 "name": "Existed_Raid", 00:20:57.395 "uuid": "103f906f-5b80-4160-b9f4-3ec0bfe77d3c", 00:20:57.395 "strip_size_kb": 64, 00:20:57.395 "state": "offline", 00:20:57.395 "raid_level": "concat", 00:20:57.395 "superblock": false, 00:20:57.395 "num_base_bdevs": 3, 00:20:57.395 "num_base_bdevs_discovered": 2, 00:20:57.395 "num_base_bdevs_operational": 2, 00:20:57.395 "base_bdevs_list": [ 00:20:57.395 { 00:20:57.395 "name": null, 00:20:57.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.395 "is_configured": false, 00:20:57.395 "data_offset": 0, 00:20:57.395 "data_size": 65536 00:20:57.395 }, 00:20:57.395 { 00:20:57.395 "name": "BaseBdev2", 00:20:57.395 "uuid": "f4d44da1-cb47-47f1-9823-7ee62bd55a6b", 00:20:57.395 "is_configured": true, 00:20:57.395 "data_offset": 0, 00:20:57.395 "data_size": 65536 00:20:57.395 }, 00:20:57.395 { 00:20:57.395 "name": "BaseBdev3", 00:20:57.395 "uuid": "a0062bca-4e2e-4930-a01d-6071aa00a6f4", 00:20:57.395 "is_configured": true, 00:20:57.395 "data_offset": 0, 00:20:57.395 "data_size": 65536 00:20:57.395 } 00:20:57.395 ] 00:20:57.395 }' 00:20:57.395 11:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.395 11:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.961 11:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:57.961 11:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:57.961 11:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.961 11:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:58.220 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:58.220 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:58.220 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:58.526 [2024-06-10 11:44:30.346962] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:58.526 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:58.526 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:58.526 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.526 11:44:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:58.799 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:58.799 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:58.799 11:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:59.057 [2024-06-10 11:44:30.978927] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:59.057 [2024-06-10 11:44:30.979145] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:20:59.057 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:59.057 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:59.314 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:59.314 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.314 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:59.314 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:59.314 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:59.314 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:59.314 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:59.314 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:59.572 BaseBdev2 00:20:59.572 11:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:59.572 11:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:20:59.572 11:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:59.572 11:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:59.572 11:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:59.572 11:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:59.572 11:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:59.830 11:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:00.088 [ 00:21:00.088 { 00:21:00.088 "name": "BaseBdev2", 00:21:00.088 "aliases": [ 00:21:00.088 "1434d844-71c2-4345-aabf-a1f00fe0dedd" 00:21:00.088 ], 00:21:00.088 "product_name": "Malloc disk", 00:21:00.088 "block_size": 512, 00:21:00.088 "num_blocks": 65536, 00:21:00.088 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:00.088 "assigned_rate_limits": { 00:21:00.088 "rw_ios_per_sec": 0, 00:21:00.088 "rw_mbytes_per_sec": 0, 00:21:00.088 "r_mbytes_per_sec": 0, 
00:21:00.088 "w_mbytes_per_sec": 0 00:21:00.088 }, 00:21:00.088 "claimed": false, 00:21:00.088 "zoned": false, 00:21:00.088 "supported_io_types": { 00:21:00.088 "read": true, 00:21:00.088 "write": true, 00:21:00.088 "unmap": true, 00:21:00.088 "write_zeroes": true, 00:21:00.088 "flush": true, 00:21:00.088 "reset": true, 00:21:00.088 "compare": false, 00:21:00.088 "compare_and_write": false, 00:21:00.088 "abort": true, 00:21:00.088 "nvme_admin": false, 00:21:00.088 "nvme_io": false 00:21:00.088 }, 00:21:00.088 "memory_domains": [ 00:21:00.088 { 00:21:00.088 "dma_device_id": "system", 00:21:00.088 "dma_device_type": 1 00:21:00.088 }, 00:21:00.088 { 00:21:00.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.088 "dma_device_type": 2 00:21:00.088 } 00:21:00.088 ], 00:21:00.088 "driver_specific": {} 00:21:00.088 } 00:21:00.088 ] 00:21:00.088 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:21:00.088 11:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:00.089 11:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:00.089 11:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:00.347 BaseBdev3 00:21:00.347 11:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:00.347 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:21:00.347 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:00.347 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:21:00.347 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:00.347 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:00.347 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:00.605 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:00.864 [ 00:21:00.864 { 00:21:00.864 "name": "BaseBdev3", 00:21:00.864 "aliases": [ 00:21:00.864 "68de0501-b634-480a-a2ad-9669e578dd51" 00:21:00.864 ], 00:21:00.864 "product_name": "Malloc disk", 00:21:00.864 "block_size": 512, 00:21:00.864 "num_blocks": 65536, 00:21:00.864 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:00.864 "assigned_rate_limits": { 00:21:00.864 "rw_ios_per_sec": 0, 00:21:00.864 "rw_mbytes_per_sec": 0, 00:21:00.864 "r_mbytes_per_sec": 0, 00:21:00.864 "w_mbytes_per_sec": 0 00:21:00.864 }, 00:21:00.864 "claimed": false, 00:21:00.864 "zoned": false, 00:21:00.864 "supported_io_types": { 00:21:00.864 "read": true, 00:21:00.864 "write": true, 00:21:00.864 "unmap": true, 00:21:00.864 "write_zeroes": true, 00:21:00.864 "flush": true, 00:21:00.864 "reset": true, 00:21:00.864 "compare": false, 00:21:00.864 "compare_and_write": false, 00:21:00.864 "abort": true, 00:21:00.864 "nvme_admin": false, 00:21:00.864 "nvme_io": false 00:21:00.864 }, 00:21:00.864 "memory_domains": [ 00:21:00.864 { 00:21:00.864 "dma_device_id": "system", 00:21:00.864 "dma_device_type": 1 00:21:00.864 }, 
00:21:00.864 { 00:21:00.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.864 "dma_device_type": 2 00:21:00.864 } 00:21:00.864 ], 00:21:00.864 "driver_specific": {} 00:21:00.864 } 00:21:00.864 ] 00:21:00.864 11:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:21:00.864 11:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:00.864 11:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:00.864 11:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:01.122 [2024-06-10 11:44:33.152518] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:01.122 [2024-06-10 11:44:33.152780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:01.122 [2024-06-10 11:44:33.152925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:01.122 [2024-06-10 11:44:33.155211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.122 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.690 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.690 "name": "Existed_Raid", 00:21:01.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.690 "strip_size_kb": 64, 00:21:01.690 "state": "configuring", 00:21:01.690 "raid_level": "concat", 00:21:01.690 "superblock": false, 00:21:01.690 "num_base_bdevs": 3, 00:21:01.690 "num_base_bdevs_discovered": 2, 00:21:01.690 "num_base_bdevs_operational": 3, 00:21:01.690 "base_bdevs_list": [ 00:21:01.690 { 00:21:01.690 "name": "BaseBdev1", 00:21:01.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.690 "is_configured": false, 00:21:01.690 "data_offset": 0, 00:21:01.690 "data_size": 0 00:21:01.690 }, 00:21:01.690 { 00:21:01.690 "name": "BaseBdev2", 00:21:01.690 "uuid": 
"1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:01.690 "is_configured": true, 00:21:01.690 "data_offset": 0, 00:21:01.690 "data_size": 65536 00:21:01.690 }, 00:21:01.690 { 00:21:01.690 "name": "BaseBdev3", 00:21:01.690 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:01.690 "is_configured": true, 00:21:01.690 "data_offset": 0, 00:21:01.690 "data_size": 65536 00:21:01.690 } 00:21:01.690 ] 00:21:01.690 }' 00:21:01.690 11:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.690 11:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:02.259 [2024-06-10 11:44:34.268728] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.259 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.517 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.517 "name": "Existed_Raid", 00:21:02.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.517 "strip_size_kb": 64, 00:21:02.517 "state": "configuring", 00:21:02.517 "raid_level": "concat", 00:21:02.517 "superblock": false, 00:21:02.517 "num_base_bdevs": 3, 00:21:02.517 "num_base_bdevs_discovered": 1, 00:21:02.517 "num_base_bdevs_operational": 3, 00:21:02.517 "base_bdevs_list": [ 00:21:02.517 { 00:21:02.517 "name": "BaseBdev1", 00:21:02.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.517 "is_configured": false, 00:21:02.517 "data_offset": 0, 00:21:02.517 "data_size": 0 00:21:02.517 }, 00:21:02.517 { 00:21:02.517 "name": null, 00:21:02.517 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:02.517 "is_configured": false, 00:21:02.517 "data_offset": 0, 00:21:02.517 "data_size": 65536 00:21:02.517 }, 00:21:02.517 { 00:21:02.517 "name": "BaseBdev3", 00:21:02.517 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:02.517 "is_configured": true, 00:21:02.517 "data_offset": 0, 00:21:02.517 "data_size": 65536 
00:21:02.517 } 00:21:02.517 ] 00:21:02.517 }' 00:21:02.517 11:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.517 11:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.452 11:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.452 11:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:03.452 11:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:03.452 11:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:03.710 [2024-06-10 11:44:35.709686] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.710 BaseBdev1 00:21:03.710 11:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:03.710 11:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:21:03.710 11:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:03.710 11:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:21:03.710 11:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:03.710 11:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:03.710 11:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:03.968 11:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:04.226 [ 00:21:04.226 { 00:21:04.226 "name": "BaseBdev1", 00:21:04.226 "aliases": [ 00:21:04.226 "f80ece47-1a11-4f5c-b1af-948f82d0498e" 00:21:04.226 ], 00:21:04.226 "product_name": "Malloc disk", 00:21:04.226 "block_size": 512, 00:21:04.226 "num_blocks": 65536, 00:21:04.226 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:04.226 "assigned_rate_limits": { 00:21:04.226 "rw_ios_per_sec": 0, 00:21:04.226 "rw_mbytes_per_sec": 0, 00:21:04.226 "r_mbytes_per_sec": 0, 00:21:04.226 "w_mbytes_per_sec": 0 00:21:04.226 }, 00:21:04.226 "claimed": true, 00:21:04.226 "claim_type": "exclusive_write", 00:21:04.226 "zoned": false, 00:21:04.226 "supported_io_types": { 00:21:04.226 "read": true, 00:21:04.226 "write": true, 00:21:04.226 "unmap": true, 00:21:04.226 "write_zeroes": true, 00:21:04.226 "flush": true, 00:21:04.226 "reset": true, 00:21:04.226 "compare": false, 00:21:04.226 "compare_and_write": false, 00:21:04.226 "abort": true, 00:21:04.226 "nvme_admin": false, 00:21:04.226 "nvme_io": false 00:21:04.226 }, 00:21:04.226 "memory_domains": [ 00:21:04.226 { 00:21:04.226 "dma_device_id": "system", 00:21:04.226 "dma_device_type": 1 00:21:04.226 }, 00:21:04.226 { 00:21:04.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.226 "dma_device_type": 2 00:21:04.226 } 00:21:04.226 ], 00:21:04.226 "driver_specific": {} 00:21:04.226 } 00:21:04.226 ] 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:21:04.226 
11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.226 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.483 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.483 "name": "Existed_Raid", 00:21:04.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.483 "strip_size_kb": 64, 00:21:04.483 "state": "configuring", 00:21:04.483 "raid_level": "concat", 00:21:04.483 "superblock": false, 00:21:04.483 "num_base_bdevs": 3, 00:21:04.483 "num_base_bdevs_discovered": 2, 00:21:04.483 "num_base_bdevs_operational": 3, 00:21:04.483 "base_bdevs_list": [ 00:21:04.483 { 00:21:04.483 "name": "BaseBdev1", 00:21:04.483 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:04.483 "is_configured": true, 00:21:04.483 "data_offset": 0, 00:21:04.483 "data_size": 65536 00:21:04.483 }, 00:21:04.483 { 00:21:04.483 "name": null, 00:21:04.483 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:04.483 "is_configured": false, 00:21:04.483 "data_offset": 0, 00:21:04.483 "data_size": 65536 00:21:04.483 }, 00:21:04.483 { 00:21:04.483 "name": "BaseBdev3", 00:21:04.483 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:04.483 "is_configured": true, 00:21:04.483 "data_offset": 0, 00:21:04.483 "data_size": 65536 00:21:04.483 } 00:21:04.483 ] 00:21:04.483 }' 00:21:04.483 11:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.483 11:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.048 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.048 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:05.305 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:05.305 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:05.563 [2024-06-10 11:44:37.570929] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:05.563 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:05.564 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.564 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.878 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.878 "name": "Existed_Raid", 00:21:05.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.878 "strip_size_kb": 64, 00:21:05.878 "state": "configuring", 00:21:05.878 "raid_level": "concat", 00:21:05.878 "superblock": false, 00:21:05.878 "num_base_bdevs": 3, 00:21:05.878 "num_base_bdevs_discovered": 1, 00:21:05.878 "num_base_bdevs_operational": 3, 00:21:05.878 "base_bdevs_list": [ 00:21:05.878 { 00:21:05.878 "name": "BaseBdev1", 00:21:05.878 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:05.878 "is_configured": true, 00:21:05.878 "data_offset": 0, 00:21:05.878 "data_size": 65536 00:21:05.878 }, 00:21:05.878 { 00:21:05.878 "name": null, 00:21:05.878 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:05.878 "is_configured": false, 00:21:05.878 "data_offset": 0, 00:21:05.878 "data_size": 65536 00:21:05.878 }, 00:21:05.878 { 00:21:05.878 "name": null, 00:21:05.878 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:05.878 "is_configured": false, 00:21:05.878 "data_offset": 0, 00:21:05.878 "data_size": 65536 00:21:05.878 } 00:21:05.878 ] 00:21:05.878 }' 00:21:05.878 11:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.878 11:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.444 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.445 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:06.702 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:06.702 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
Existed_Raid BaseBdev3 00:21:06.960 [2024-06-10 11:44:38.939235] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.960 11:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.217 11:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.217 "name": "Existed_Raid", 00:21:07.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.217 "strip_size_kb": 64, 00:21:07.217 "state": "configuring", 00:21:07.217 "raid_level": "concat", 00:21:07.217 "superblock": false, 00:21:07.217 "num_base_bdevs": 3, 00:21:07.217 "num_base_bdevs_discovered": 2, 00:21:07.217 "num_base_bdevs_operational": 3, 00:21:07.217 "base_bdevs_list": [ 00:21:07.217 { 00:21:07.217 "name": "BaseBdev1", 00:21:07.217 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:07.217 "is_configured": true, 00:21:07.217 "data_offset": 0, 00:21:07.217 "data_size": 65536 00:21:07.217 }, 00:21:07.217 { 00:21:07.217 "name": null, 00:21:07.217 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:07.217 "is_configured": false, 00:21:07.217 "data_offset": 0, 00:21:07.217 "data_size": 65536 00:21:07.217 }, 00:21:07.217 { 00:21:07.217 "name": "BaseBdev3", 00:21:07.217 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:07.217 "is_configured": true, 00:21:07.217 "data_offset": 0, 00:21:07.217 "data_size": 65536 00:21:07.217 } 00:21:07.217 ] 00:21:07.217 }' 00:21:07.217 11:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.217 11:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.782 11:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:07.782 11:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.347 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:08.347 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:08.347 [2024-06-10 11:44:40.367550] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.606 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.864 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.864 "name": "Existed_Raid", 00:21:08.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.864 "strip_size_kb": 64, 00:21:08.864 "state": "configuring", 00:21:08.864 "raid_level": "concat", 00:21:08.864 "superblock": false, 00:21:08.864 "num_base_bdevs": 3, 00:21:08.864 "num_base_bdevs_discovered": 1, 00:21:08.864 "num_base_bdevs_operational": 3, 00:21:08.864 "base_bdevs_list": [ 00:21:08.864 { 00:21:08.864 "name": null, 00:21:08.864 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:08.864 "is_configured": false, 00:21:08.864 "data_offset": 0, 00:21:08.864 "data_size": 65536 00:21:08.864 }, 00:21:08.864 { 00:21:08.864 "name": null, 00:21:08.864 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:08.864 "is_configured": false, 00:21:08.864 "data_offset": 0, 00:21:08.864 "data_size": 65536 00:21:08.864 }, 00:21:08.864 { 00:21:08.864 "name": "BaseBdev3", 00:21:08.864 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:08.864 "is_configured": true, 00:21:08.864 "data_offset": 0, 00:21:08.864 "data_size": 65536 00:21:08.864 } 00:21:08.864 ] 00:21:08.864 }' 00:21:08.864 11:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.864 11:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.430 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:09.430 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.688 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:09.688 11:44:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:09.946 [2024-06-10 11:44:41.847459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.946 11:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.205 11:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.205 "name": "Existed_Raid", 00:21:10.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.205 "strip_size_kb": 64, 00:21:10.205 "state": "configuring", 00:21:10.205 "raid_level": "concat", 00:21:10.205 "superblock": false, 00:21:10.205 "num_base_bdevs": 3, 00:21:10.205 "num_base_bdevs_discovered": 2, 00:21:10.205 "num_base_bdevs_operational": 3, 00:21:10.205 "base_bdevs_list": [ 00:21:10.205 { 00:21:10.205 "name": null, 00:21:10.205 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:10.205 "is_configured": false, 00:21:10.205 "data_offset": 0, 00:21:10.205 "data_size": 65536 00:21:10.205 }, 00:21:10.205 { 00:21:10.205 "name": "BaseBdev2", 00:21:10.205 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:10.205 "is_configured": true, 00:21:10.205 "data_offset": 0, 00:21:10.205 "data_size": 65536 00:21:10.205 }, 00:21:10.205 { 00:21:10.205 "name": "BaseBdev3", 00:21:10.205 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:10.205 "is_configured": true, 00:21:10.205 "data_offset": 0, 00:21:10.205 "data_size": 65536 00:21:10.205 } 00:21:10.205 ] 00:21:10.205 }' 00:21:10.205 11:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.205 11:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.772 11:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.772 11:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:11.030 11:44:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:11.030 11:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.030 11:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:11.288 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f80ece47-1a11-4f5c-b1af-948f82d0498e 00:21:11.545 [2024-06-10 11:44:43.436587] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:11.545 [2024-06-10 11:44:43.436817] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:11.545 [2024-06-10 11:44:43.436857] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:11.545 [2024-06-10 11:44:43.437074] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:11.545 [2024-06-10 11:44:43.437442] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:11.545 [2024-06-10 11:44:43.437549] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:21:11.545 [2024-06-10 11:44:43.437870] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.545 NewBaseBdev 00:21:11.545 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:11.545 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:21:11.545 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:11.545 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:21:11.545 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:11.545 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:11.545 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:11.802 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:12.060 [ 00:21:12.060 { 00:21:12.060 "name": "NewBaseBdev", 00:21:12.060 "aliases": [ 00:21:12.060 "f80ece47-1a11-4f5c-b1af-948f82d0498e" 00:21:12.060 ], 00:21:12.060 "product_name": "Malloc disk", 00:21:12.060 "block_size": 512, 00:21:12.060 "num_blocks": 65536, 00:21:12.060 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:12.060 "assigned_rate_limits": { 00:21:12.060 "rw_ios_per_sec": 0, 00:21:12.060 "rw_mbytes_per_sec": 0, 00:21:12.060 "r_mbytes_per_sec": 0, 00:21:12.060 "w_mbytes_per_sec": 0 00:21:12.060 }, 00:21:12.060 "claimed": true, 00:21:12.060 "claim_type": "exclusive_write", 00:21:12.060 "zoned": false, 00:21:12.060 "supported_io_types": { 00:21:12.060 "read": true, 00:21:12.060 "write": true, 00:21:12.060 "unmap": true, 00:21:12.060 "write_zeroes": true, 00:21:12.060 "flush": true, 00:21:12.060 "reset": true, 00:21:12.060 "compare": false, 00:21:12.060 "compare_and_write": false, 
00:21:12.060 "abort": true, 00:21:12.060 "nvme_admin": false, 00:21:12.060 "nvme_io": false 00:21:12.060 }, 00:21:12.060 "memory_domains": [ 00:21:12.060 { 00:21:12.060 "dma_device_id": "system", 00:21:12.060 "dma_device_type": 1 00:21:12.060 }, 00:21:12.060 { 00:21:12.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.060 "dma_device_type": 2 00:21:12.060 } 00:21:12.060 ], 00:21:12.060 "driver_specific": {} 00:21:12.060 } 00:21:12.060 ] 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.060 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.317 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:12.317 "name": "Existed_Raid", 00:21:12.317 "uuid": "4ab14b46-d68f-4368-a930-0be899b7ae89", 00:21:12.317 "strip_size_kb": 64, 00:21:12.317 "state": "online", 00:21:12.317 "raid_level": "concat", 00:21:12.317 "superblock": false, 00:21:12.317 "num_base_bdevs": 3, 00:21:12.317 "num_base_bdevs_discovered": 3, 00:21:12.317 "num_base_bdevs_operational": 3, 00:21:12.317 "base_bdevs_list": [ 00:21:12.317 { 00:21:12.317 "name": "NewBaseBdev", 00:21:12.317 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:12.317 "is_configured": true, 00:21:12.317 "data_offset": 0, 00:21:12.317 "data_size": 65536 00:21:12.317 }, 00:21:12.317 { 00:21:12.317 "name": "BaseBdev2", 00:21:12.317 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:12.317 "is_configured": true, 00:21:12.317 "data_offset": 0, 00:21:12.317 "data_size": 65536 00:21:12.317 }, 00:21:12.317 { 00:21:12.317 "name": "BaseBdev3", 00:21:12.317 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:12.317 "is_configured": true, 00:21:12.317 "data_offset": 0, 00:21:12.317 "data_size": 65536 00:21:12.317 } 00:21:12.317 ] 00:21:12.317 }' 00:21:12.317 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:12.317 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.881 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 
00:21:12.881 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:12.881 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:12.881 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:12.881 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:12.881 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:12.881 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:12.881 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:13.139 [2024-06-10 11:44:45.081259] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.139 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:13.139 "name": "Existed_Raid", 00:21:13.139 "aliases": [ 00:21:13.139 "4ab14b46-d68f-4368-a930-0be899b7ae89" 00:21:13.139 ], 00:21:13.139 "product_name": "Raid Volume", 00:21:13.139 "block_size": 512, 00:21:13.139 "num_blocks": 196608, 00:21:13.139 "uuid": "4ab14b46-d68f-4368-a930-0be899b7ae89", 00:21:13.139 "assigned_rate_limits": { 00:21:13.139 "rw_ios_per_sec": 0, 00:21:13.139 "rw_mbytes_per_sec": 0, 00:21:13.139 "r_mbytes_per_sec": 0, 00:21:13.139 "w_mbytes_per_sec": 0 00:21:13.139 }, 00:21:13.139 "claimed": false, 00:21:13.139 "zoned": false, 00:21:13.139 "supported_io_types": { 00:21:13.139 "read": true, 00:21:13.139 "write": true, 00:21:13.139 "unmap": true, 00:21:13.139 "write_zeroes": true, 00:21:13.139 "flush": true, 00:21:13.139 "reset": true, 00:21:13.139 "compare": false, 00:21:13.139 "compare_and_write": false, 00:21:13.139 "abort": false, 00:21:13.139 "nvme_admin": false, 00:21:13.139 "nvme_io": false 00:21:13.139 }, 00:21:13.139 "memory_domains": [ 00:21:13.139 { 00:21:13.139 "dma_device_id": "system", 00:21:13.139 "dma_device_type": 1 00:21:13.139 }, 00:21:13.139 { 00:21:13.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.139 "dma_device_type": 2 00:21:13.139 }, 00:21:13.139 { 00:21:13.139 "dma_device_id": "system", 00:21:13.139 "dma_device_type": 1 00:21:13.139 }, 00:21:13.139 { 00:21:13.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.139 "dma_device_type": 2 00:21:13.139 }, 00:21:13.139 { 00:21:13.139 "dma_device_id": "system", 00:21:13.139 "dma_device_type": 1 00:21:13.139 }, 00:21:13.139 { 00:21:13.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.139 "dma_device_type": 2 00:21:13.139 } 00:21:13.139 ], 00:21:13.139 "driver_specific": { 00:21:13.139 "raid": { 00:21:13.139 "uuid": "4ab14b46-d68f-4368-a930-0be899b7ae89", 00:21:13.139 "strip_size_kb": 64, 00:21:13.139 "state": "online", 00:21:13.139 "raid_level": "concat", 00:21:13.139 "superblock": false, 00:21:13.139 "num_base_bdevs": 3, 00:21:13.139 "num_base_bdevs_discovered": 3, 00:21:13.139 "num_base_bdevs_operational": 3, 00:21:13.139 "base_bdevs_list": [ 00:21:13.139 { 00:21:13.139 "name": "NewBaseBdev", 00:21:13.139 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:13.139 "is_configured": true, 00:21:13.139 "data_offset": 0, 00:21:13.139 "data_size": 65536 00:21:13.139 }, 00:21:13.139 { 00:21:13.139 "name": "BaseBdev2", 00:21:13.139 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:13.139 "is_configured": true, 00:21:13.139 "data_offset": 0, 
00:21:13.139 "data_size": 65536 00:21:13.139 }, 00:21:13.139 { 00:21:13.139 "name": "BaseBdev3", 00:21:13.139 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:13.139 "is_configured": true, 00:21:13.139 "data_offset": 0, 00:21:13.139 "data_size": 65536 00:21:13.139 } 00:21:13.139 ] 00:21:13.139 } 00:21:13.139 } 00:21:13.139 }' 00:21:13.139 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:13.139 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:13.139 BaseBdev2 00:21:13.139 BaseBdev3' 00:21:13.139 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:13.139 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:13.139 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.396 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.396 "name": "NewBaseBdev", 00:21:13.396 "aliases": [ 00:21:13.396 "f80ece47-1a11-4f5c-b1af-948f82d0498e" 00:21:13.396 ], 00:21:13.396 "product_name": "Malloc disk", 00:21:13.396 "block_size": 512, 00:21:13.396 "num_blocks": 65536, 00:21:13.396 "uuid": "f80ece47-1a11-4f5c-b1af-948f82d0498e", 00:21:13.396 "assigned_rate_limits": { 00:21:13.396 "rw_ios_per_sec": 0, 00:21:13.396 "rw_mbytes_per_sec": 0, 00:21:13.396 "r_mbytes_per_sec": 0, 00:21:13.396 "w_mbytes_per_sec": 0 00:21:13.396 }, 00:21:13.396 "claimed": true, 00:21:13.396 "claim_type": "exclusive_write", 00:21:13.396 "zoned": false, 00:21:13.396 "supported_io_types": { 00:21:13.396 "read": true, 00:21:13.396 "write": true, 00:21:13.396 "unmap": true, 00:21:13.396 "write_zeroes": true, 00:21:13.396 "flush": true, 00:21:13.396 "reset": true, 00:21:13.396 "compare": false, 00:21:13.396 "compare_and_write": false, 00:21:13.396 "abort": true, 00:21:13.396 "nvme_admin": false, 00:21:13.396 "nvme_io": false 00:21:13.396 }, 00:21:13.396 "memory_domains": [ 00:21:13.396 { 00:21:13.396 "dma_device_id": "system", 00:21:13.396 "dma_device_type": 1 00:21:13.396 }, 00:21:13.396 { 00:21:13.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.396 "dma_device_type": 2 00:21:13.396 } 00:21:13.396 ], 00:21:13.396 "driver_specific": {} 00:21:13.396 }' 00:21:13.396 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.653 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.910 11:44:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.910 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.910 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:13.910 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.910 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:14.167 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:14.167 "name": "BaseBdev2", 00:21:14.167 "aliases": [ 00:21:14.167 "1434d844-71c2-4345-aabf-a1f00fe0dedd" 00:21:14.167 ], 00:21:14.167 "product_name": "Malloc disk", 00:21:14.167 "block_size": 512, 00:21:14.167 "num_blocks": 65536, 00:21:14.167 "uuid": "1434d844-71c2-4345-aabf-a1f00fe0dedd", 00:21:14.167 "assigned_rate_limits": { 00:21:14.167 "rw_ios_per_sec": 0, 00:21:14.167 "rw_mbytes_per_sec": 0, 00:21:14.167 "r_mbytes_per_sec": 0, 00:21:14.167 "w_mbytes_per_sec": 0 00:21:14.167 }, 00:21:14.167 "claimed": true, 00:21:14.167 "claim_type": "exclusive_write", 00:21:14.167 "zoned": false, 00:21:14.167 "supported_io_types": { 00:21:14.167 "read": true, 00:21:14.167 "write": true, 00:21:14.167 "unmap": true, 00:21:14.167 "write_zeroes": true, 00:21:14.167 "flush": true, 00:21:14.167 "reset": true, 00:21:14.167 "compare": false, 00:21:14.167 "compare_and_write": false, 00:21:14.167 "abort": true, 00:21:14.167 "nvme_admin": false, 00:21:14.167 "nvme_io": false 00:21:14.167 }, 00:21:14.167 "memory_domains": [ 00:21:14.167 { 00:21:14.167 "dma_device_id": "system", 00:21:14.167 "dma_device_type": 1 00:21:14.167 }, 00:21:14.167 { 00:21:14.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.167 "dma_device_type": 2 00:21:14.167 } 00:21:14.167 ], 00:21:14.167 "driver_specific": {} 00:21:14.167 }' 00:21:14.167 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.167 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.167 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:14.167 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.167 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:14.424 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:14.424 
11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:14.682 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:14.682 "name": "BaseBdev3", 00:21:14.682 "aliases": [ 00:21:14.682 "68de0501-b634-480a-a2ad-9669e578dd51" 00:21:14.682 ], 00:21:14.682 "product_name": "Malloc disk", 00:21:14.682 "block_size": 512, 00:21:14.682 "num_blocks": 65536, 00:21:14.682 "uuid": "68de0501-b634-480a-a2ad-9669e578dd51", 00:21:14.682 "assigned_rate_limits": { 00:21:14.682 "rw_ios_per_sec": 0, 00:21:14.682 "rw_mbytes_per_sec": 0, 00:21:14.682 "r_mbytes_per_sec": 0, 00:21:14.682 "w_mbytes_per_sec": 0 00:21:14.682 }, 00:21:14.682 "claimed": true, 00:21:14.682 "claim_type": "exclusive_write", 00:21:14.682 "zoned": false, 00:21:14.682 "supported_io_types": { 00:21:14.682 "read": true, 00:21:14.682 "write": true, 00:21:14.682 "unmap": true, 00:21:14.682 "write_zeroes": true, 00:21:14.682 "flush": true, 00:21:14.682 "reset": true, 00:21:14.682 "compare": false, 00:21:14.682 "compare_and_write": false, 00:21:14.682 "abort": true, 00:21:14.682 "nvme_admin": false, 00:21:14.682 "nvme_io": false 00:21:14.682 }, 00:21:14.682 "memory_domains": [ 00:21:14.682 { 00:21:14.682 "dma_device_id": "system", 00:21:14.682 "dma_device_type": 1 00:21:14.682 }, 00:21:14.682 { 00:21:14.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.682 "dma_device_type": 2 00:21:14.682 } 00:21:14.682 ], 00:21:14.682 "driver_specific": {} 00:21:14.682 }' 00:21:14.682 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:14.942 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:15.200 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:15.200 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:15.200 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:15.457 [2024-06-10 11:44:47.335388] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:15.457 [2024-06-10 11:44:47.335589] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.457 [2024-06-10 11:44:47.335799] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.457 [2024-06-10 11:44:47.335986] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.457 [2024-06-10 11:44:47.336085] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 129513 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 129513 ']' 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 129513 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 129513 00:21:15.457 killing process with pid 129513 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 129513' 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 129513 00:21:15.457 [2024-06-10 11:44:47.382364] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:15.457 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 129513 00:21:15.714 [2024-06-10 11:44:47.690813] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:17.087 ************************************ 00:21:17.087 END TEST raid_state_function_test 00:21:17.087 ************************************ 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:17.087 00:21:17.087 real 0m31.602s 00:21:17.087 user 0m57.176s 00:21:17.087 sys 0m4.704s 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.087 11:44:49 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:21:17.087 11:44:49 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:21:17.087 11:44:49 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:17.087 11:44:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:17.087 ************************************ 00:21:17.087 START TEST raid_state_function_test_sb 00:21:17.087 ************************************ 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 3 true 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:17.087 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=130511 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:17.344 Process raid pid: 130511 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 130511' 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 130511 /var/tmp/spdk-raid.sock 00:21:17.344 11:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 130511 ']' 00:21:17.345 11:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:17.345 11:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:17.345 11:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:17.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
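The raid pid and "Waiting for process to start up..." lines above come from launching a dedicated bdev_svc app and polling its RPC socket before any bdev RPCs are issued. A rough equivalent, using the binary and flags shown in the trace; the polling loop is a simplified stand-in for the shared waitforlisten helper, not its actual implementation:

    #!/usr/bin/env bash
    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # -r: RPC listen address, -i: shared-memory id, -L: enable bdev_raid debug logging
    "$app" -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Wait until the app answers RPCs on the socket before driving the test.
    until "$rpc" -s "$sock" rpc_get_methods &>/dev/null; do sleep 0.1; done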
00:21:17.345 11:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:17.345 11:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.345 [2024-06-10 11:44:49.227118] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:21:17.345 [2024-06-10 11:44:49.227547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.602 [2024-06-10 11:44:49.403552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.602 [2024-06-10 11:44:49.614880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.861 [2024-06-10 11:44:49.836675] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:18.426 [2024-06-10 11:44:50.433943] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:18.426 [2024-06-10 11:44:50.434244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:18.426 [2024-06-10 11:44:50.434352] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:18.426 [2024-06-10 11:44:50.434491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:18.426 [2024-06-10 11:44:50.434571] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:18.426 [2024-06-10 11:44:50.434623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:21:18.426 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.683 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.683 "name": "Existed_Raid", 00:21:18.683 "uuid": "447c025d-7269-4813-8795-eeab8ba43d00", 00:21:18.683 "strip_size_kb": 64, 00:21:18.683 "state": "configuring", 00:21:18.683 "raid_level": "concat", 00:21:18.683 "superblock": true, 00:21:18.683 "num_base_bdevs": 3, 00:21:18.683 "num_base_bdevs_discovered": 0, 00:21:18.683 "num_base_bdevs_operational": 3, 00:21:18.683 "base_bdevs_list": [ 00:21:18.683 { 00:21:18.683 "name": "BaseBdev1", 00:21:18.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.683 "is_configured": false, 00:21:18.684 "data_offset": 0, 00:21:18.684 "data_size": 0 00:21:18.684 }, 00:21:18.684 { 00:21:18.684 "name": "BaseBdev2", 00:21:18.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.684 "is_configured": false, 00:21:18.684 "data_offset": 0, 00:21:18.684 "data_size": 0 00:21:18.684 }, 00:21:18.684 { 00:21:18.684 "name": "BaseBdev3", 00:21:18.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.684 "is_configured": false, 00:21:18.684 "data_offset": 0, 00:21:18.684 "data_size": 0 00:21:18.684 } 00:21:18.684 ] 00:21:18.684 }' 00:21:18.684 11:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.684 11:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.247 11:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:19.504 [2024-06-10 11:44:51.498018] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:19.504 [2024-06-10 11:44:51.498204] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:19.504 11:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:20.070 [2024-06-10 11:44:51.822111] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:20.070 [2024-06-10 11:44:51.822369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:20.070 [2024-06-10 11:44:51.822464] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:20.070 [2024-06-10 11:44:51.822517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:20.070 [2024-06-10 11:44:51.822546] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:20.070 [2024-06-10 11:44:51.822592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:20.070 11:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:20.332 [2024-06-10 11:44:52.144060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.332 BaseBdev1 00:21:20.332 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:20.332 11:44:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:21:20.332 11:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:20.332 11:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:20.332 11:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:20.332 11:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:20.332 11:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:20.607 11:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:20.865 [ 00:21:20.865 { 00:21:20.865 "name": "BaseBdev1", 00:21:20.865 "aliases": [ 00:21:20.865 "19054649-e308-4161-b26f-8100b6384ac2" 00:21:20.865 ], 00:21:20.865 "product_name": "Malloc disk", 00:21:20.865 "block_size": 512, 00:21:20.865 "num_blocks": 65536, 00:21:20.865 "uuid": "19054649-e308-4161-b26f-8100b6384ac2", 00:21:20.865 "assigned_rate_limits": { 00:21:20.865 "rw_ios_per_sec": 0, 00:21:20.865 "rw_mbytes_per_sec": 0, 00:21:20.865 "r_mbytes_per_sec": 0, 00:21:20.865 "w_mbytes_per_sec": 0 00:21:20.865 }, 00:21:20.865 "claimed": true, 00:21:20.865 "claim_type": "exclusive_write", 00:21:20.865 "zoned": false, 00:21:20.865 "supported_io_types": { 00:21:20.865 "read": true, 00:21:20.865 "write": true, 00:21:20.865 "unmap": true, 00:21:20.865 "write_zeroes": true, 00:21:20.865 "flush": true, 00:21:20.865 "reset": true, 00:21:20.865 "compare": false, 00:21:20.865 "compare_and_write": false, 00:21:20.865 "abort": true, 00:21:20.865 "nvme_admin": false, 00:21:20.865 "nvme_io": false 00:21:20.865 }, 00:21:20.865 "memory_domains": [ 00:21:20.865 { 00:21:20.865 "dma_device_id": "system", 00:21:20.865 "dma_device_type": 1 00:21:20.865 }, 00:21:20.865 { 00:21:20.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.865 "dma_device_type": 2 00:21:20.865 } 00:21:20.865 ], 00:21:20.865 "driver_specific": {} 00:21:20.865 } 00:21:20.865 ] 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.865 11:44:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.865 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.865 "name": "Existed_Raid", 00:21:20.865 "uuid": "b81430a8-d4f6-42ef-8920-67ae284b4d6f", 00:21:20.865 "strip_size_kb": 64, 00:21:20.865 "state": "configuring", 00:21:20.865 "raid_level": "concat", 00:21:20.865 "superblock": true, 00:21:20.865 "num_base_bdevs": 3, 00:21:20.865 "num_base_bdevs_discovered": 1, 00:21:20.865 "num_base_bdevs_operational": 3, 00:21:20.865 "base_bdevs_list": [ 00:21:20.865 { 00:21:20.865 "name": "BaseBdev1", 00:21:20.865 "uuid": "19054649-e308-4161-b26f-8100b6384ac2", 00:21:20.865 "is_configured": true, 00:21:20.865 "data_offset": 2048, 00:21:20.865 "data_size": 63488 00:21:20.865 }, 00:21:20.865 { 00:21:20.865 "name": "BaseBdev2", 00:21:20.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.865 "is_configured": false, 00:21:20.866 "data_offset": 0, 00:21:20.866 "data_size": 0 00:21:20.866 }, 00:21:20.866 { 00:21:20.866 "name": "BaseBdev3", 00:21:20.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.866 "is_configured": false, 00:21:20.866 "data_offset": 0, 00:21:20.866 "data_size": 0 00:21:20.866 } 00:21:20.866 ] 00:21:20.866 }' 00:21:20.866 11:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.866 11:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.798 11:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:21.798 [2024-06-10 11:44:53.776471] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:21.798 [2024-06-10 11:44:53.776742] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:21.798 11:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:22.056 [2024-06-10 11:44:54.032566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.056 [2024-06-10 11:44:54.034992] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:22.056 [2024-06-10 11:44:54.035213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:22.056 [2024-06-10 11:44:54.035314] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:22.056 [2024-06-10 11:44:54.035395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
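The "configuring" states in this part of the trace come from registering the raid before its base bdevs exist and then adding them one at a time: each bdev_malloc_create is claimed by the raid and bumps num_base_bdevs_discovered until the volume goes online. A condensed sketch of that flow with the same RPCs and sizes (32 MiB malloc bdevs, 512-byte blocks) as in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Registered first with no base bdevs present, the raid stays in "configuring".
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # Each malloc bdev that appears is claimed and increments num_base_bdevs_discovered.
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
    done
    # Once all three are discovered the state should read "online".
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'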
00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.056 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.313 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.313 "name": "Existed_Raid", 00:21:22.313 "uuid": "241e99cf-9425-45c1-abdd-fe7531bfa50a", 00:21:22.313 "strip_size_kb": 64, 00:21:22.313 "state": "configuring", 00:21:22.313 "raid_level": "concat", 00:21:22.313 "superblock": true, 00:21:22.313 "num_base_bdevs": 3, 00:21:22.313 "num_base_bdevs_discovered": 1, 00:21:22.313 "num_base_bdevs_operational": 3, 00:21:22.313 "base_bdevs_list": [ 00:21:22.313 { 00:21:22.314 "name": "BaseBdev1", 00:21:22.314 "uuid": "19054649-e308-4161-b26f-8100b6384ac2", 00:21:22.314 "is_configured": true, 00:21:22.314 "data_offset": 2048, 00:21:22.314 "data_size": 63488 00:21:22.314 }, 00:21:22.314 { 00:21:22.314 "name": "BaseBdev2", 00:21:22.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.314 "is_configured": false, 00:21:22.314 "data_offset": 0, 00:21:22.314 "data_size": 0 00:21:22.314 }, 00:21:22.314 { 00:21:22.314 "name": "BaseBdev3", 00:21:22.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.314 "is_configured": false, 00:21:22.314 "data_offset": 0, 00:21:22.314 "data_size": 0 00:21:22.314 } 00:21:22.314 ] 00:21:22.314 }' 00:21:22.314 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.314 11:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.878 11:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:23.456 [2024-06-10 11:44:55.192867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.456 BaseBdev2 00:21:23.456 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:23.456 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:21:23.456 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:23.456 11:44:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local i 00:21:23.456 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:23.456 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:23.457 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:23.457 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:23.714 [ 00:21:23.714 { 00:21:23.714 "name": "BaseBdev2", 00:21:23.714 "aliases": [ 00:21:23.714 "7f28bdd4-d20f-4fd4-912e-04e8cdc330ef" 00:21:23.714 ], 00:21:23.714 "product_name": "Malloc disk", 00:21:23.714 "block_size": 512, 00:21:23.714 "num_blocks": 65536, 00:21:23.714 "uuid": "7f28bdd4-d20f-4fd4-912e-04e8cdc330ef", 00:21:23.714 "assigned_rate_limits": { 00:21:23.714 "rw_ios_per_sec": 0, 00:21:23.714 "rw_mbytes_per_sec": 0, 00:21:23.714 "r_mbytes_per_sec": 0, 00:21:23.714 "w_mbytes_per_sec": 0 00:21:23.714 }, 00:21:23.714 "claimed": true, 00:21:23.714 "claim_type": "exclusive_write", 00:21:23.714 "zoned": false, 00:21:23.714 "supported_io_types": { 00:21:23.714 "read": true, 00:21:23.714 "write": true, 00:21:23.714 "unmap": true, 00:21:23.714 "write_zeroes": true, 00:21:23.714 "flush": true, 00:21:23.714 "reset": true, 00:21:23.714 "compare": false, 00:21:23.714 "compare_and_write": false, 00:21:23.714 "abort": true, 00:21:23.714 "nvme_admin": false, 00:21:23.714 "nvme_io": false 00:21:23.714 }, 00:21:23.714 "memory_domains": [ 00:21:23.714 { 00:21:23.714 "dma_device_id": "system", 00:21:23.714 "dma_device_type": 1 00:21:23.714 }, 00:21:23.714 { 00:21:23.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.714 "dma_device_type": 2 00:21:23.714 } 00:21:23.714 ], 00:21:23.714 "driver_specific": {} 00:21:23.714 } 00:21:23.714 ] 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.714 
11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.714 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.972 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.972 "name": "Existed_Raid", 00:21:23.972 "uuid": "241e99cf-9425-45c1-abdd-fe7531bfa50a", 00:21:23.972 "strip_size_kb": 64, 00:21:23.972 "state": "configuring", 00:21:23.972 "raid_level": "concat", 00:21:23.972 "superblock": true, 00:21:23.972 "num_base_bdevs": 3, 00:21:23.972 "num_base_bdevs_discovered": 2, 00:21:23.972 "num_base_bdevs_operational": 3, 00:21:23.972 "base_bdevs_list": [ 00:21:23.972 { 00:21:23.972 "name": "BaseBdev1", 00:21:23.972 "uuid": "19054649-e308-4161-b26f-8100b6384ac2", 00:21:23.972 "is_configured": true, 00:21:23.972 "data_offset": 2048, 00:21:23.972 "data_size": 63488 00:21:23.972 }, 00:21:23.972 { 00:21:23.972 "name": "BaseBdev2", 00:21:23.972 "uuid": "7f28bdd4-d20f-4fd4-912e-04e8cdc330ef", 00:21:23.972 "is_configured": true, 00:21:23.972 "data_offset": 2048, 00:21:23.972 "data_size": 63488 00:21:23.972 }, 00:21:23.972 { 00:21:23.972 "name": "BaseBdev3", 00:21:23.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.972 "is_configured": false, 00:21:23.972 "data_offset": 0, 00:21:23.972 "data_size": 0 00:21:23.972 } 00:21:23.972 ] 00:21:23.972 }' 00:21:23.972 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.972 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.536 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:24.795 [2024-06-10 11:44:56.840212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:24.795 [2024-06-10 11:44:56.840692] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:21:24.795 [2024-06-10 11:44:56.840844] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:24.795 [2024-06-10 11:44:56.841029] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:24.795 [2024-06-10 11:44:56.841414] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:21:24.795 [2024-06-10 11:44:56.841547] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:21:24.795 BaseBdev3 00:21:24.795 [2024-06-10 11:44:56.841831] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.053 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:25.053 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:21:25.053 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:25.053 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:25.053 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:25.054 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 
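The waitforbdev trace here amounts to flushing pending examine callbacks and then querying the new bdev by name with a timeout, so the test does not race bdev registration. Roughly, with the two RPCs shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Let pending examine callbacks finish, then wait up to 2000 ms for the bdev to register.
    "$rpc" -s "$sock" bdev_wait_for_examine
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null && echo "BaseBdev2 registered"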
00:21:25.054 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:25.054 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:25.311 [ 00:21:25.311 { 00:21:25.311 "name": "BaseBdev3", 00:21:25.311 "aliases": [ 00:21:25.311 "0ae9aad9-19bc-4982-89e3-a616727abd1a" 00:21:25.311 ], 00:21:25.311 "product_name": "Malloc disk", 00:21:25.311 "block_size": 512, 00:21:25.311 "num_blocks": 65536, 00:21:25.311 "uuid": "0ae9aad9-19bc-4982-89e3-a616727abd1a", 00:21:25.311 "assigned_rate_limits": { 00:21:25.311 "rw_ios_per_sec": 0, 00:21:25.311 "rw_mbytes_per_sec": 0, 00:21:25.311 "r_mbytes_per_sec": 0, 00:21:25.311 "w_mbytes_per_sec": 0 00:21:25.311 }, 00:21:25.311 "claimed": true, 00:21:25.311 "claim_type": "exclusive_write", 00:21:25.311 "zoned": false, 00:21:25.311 "supported_io_types": { 00:21:25.311 "read": true, 00:21:25.311 "write": true, 00:21:25.311 "unmap": true, 00:21:25.311 "write_zeroes": true, 00:21:25.311 "flush": true, 00:21:25.311 "reset": true, 00:21:25.311 "compare": false, 00:21:25.311 "compare_and_write": false, 00:21:25.311 "abort": true, 00:21:25.311 "nvme_admin": false, 00:21:25.311 "nvme_io": false 00:21:25.311 }, 00:21:25.311 "memory_domains": [ 00:21:25.311 { 00:21:25.311 "dma_device_id": "system", 00:21:25.311 "dma_device_type": 1 00:21:25.311 }, 00:21:25.311 { 00:21:25.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.311 "dma_device_type": 2 00:21:25.311 } 00:21:25.311 ], 00:21:25.311 "driver_specific": {} 00:21:25.311 } 00:21:25.311 ] 00:21:25.311 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.312 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:25.569 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.569 "name": "Existed_Raid", 00:21:25.569 "uuid": "241e99cf-9425-45c1-abdd-fe7531bfa50a", 00:21:25.569 "strip_size_kb": 64, 00:21:25.569 "state": "online", 00:21:25.569 "raid_level": "concat", 00:21:25.569 "superblock": true, 00:21:25.569 "num_base_bdevs": 3, 00:21:25.569 "num_base_bdevs_discovered": 3, 00:21:25.569 "num_base_bdevs_operational": 3, 00:21:25.569 "base_bdevs_list": [ 00:21:25.569 { 00:21:25.569 "name": "BaseBdev1", 00:21:25.569 "uuid": "19054649-e308-4161-b26f-8100b6384ac2", 00:21:25.569 "is_configured": true, 00:21:25.569 "data_offset": 2048, 00:21:25.569 "data_size": 63488 00:21:25.569 }, 00:21:25.569 { 00:21:25.569 "name": "BaseBdev2", 00:21:25.569 "uuid": "7f28bdd4-d20f-4fd4-912e-04e8cdc330ef", 00:21:25.569 "is_configured": true, 00:21:25.569 "data_offset": 2048, 00:21:25.569 "data_size": 63488 00:21:25.569 }, 00:21:25.569 { 00:21:25.569 "name": "BaseBdev3", 00:21:25.569 "uuid": "0ae9aad9-19bc-4982-89e3-a616727abd1a", 00:21:25.569 "is_configured": true, 00:21:25.569 "data_offset": 2048, 00:21:25.569 "data_size": 63488 00:21:25.569 } 00:21:25.569 ] 00:21:25.569 }' 00:21:25.569 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.569 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.134 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:26.134 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:26.134 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:26.134 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:26.134 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:26.134 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:26.134 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:26.134 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:26.392 [2024-06-10 11:44:58.423377] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.392 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:26.392 "name": "Existed_Raid", 00:21:26.392 "aliases": [ 00:21:26.392 "241e99cf-9425-45c1-abdd-fe7531bfa50a" 00:21:26.392 ], 00:21:26.392 "product_name": "Raid Volume", 00:21:26.392 "block_size": 512, 00:21:26.392 "num_blocks": 190464, 00:21:26.392 "uuid": "241e99cf-9425-45c1-abdd-fe7531bfa50a", 00:21:26.392 "assigned_rate_limits": { 00:21:26.392 "rw_ios_per_sec": 0, 00:21:26.392 "rw_mbytes_per_sec": 0, 00:21:26.392 "r_mbytes_per_sec": 0, 00:21:26.392 "w_mbytes_per_sec": 0 00:21:26.392 }, 00:21:26.392 "claimed": false, 00:21:26.392 "zoned": false, 00:21:26.392 "supported_io_types": { 00:21:26.392 "read": true, 00:21:26.392 "write": true, 00:21:26.392 "unmap": true, 00:21:26.392 "write_zeroes": true, 00:21:26.392 "flush": true, 00:21:26.392 "reset": true, 00:21:26.392 "compare": false, 00:21:26.392 "compare_and_write": false, 00:21:26.392 "abort": false, 00:21:26.392 
"nvme_admin": false, 00:21:26.392 "nvme_io": false 00:21:26.392 }, 00:21:26.392 "memory_domains": [ 00:21:26.392 { 00:21:26.392 "dma_device_id": "system", 00:21:26.392 "dma_device_type": 1 00:21:26.392 }, 00:21:26.392 { 00:21:26.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.392 "dma_device_type": 2 00:21:26.392 }, 00:21:26.392 { 00:21:26.392 "dma_device_id": "system", 00:21:26.392 "dma_device_type": 1 00:21:26.392 }, 00:21:26.392 { 00:21:26.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.392 "dma_device_type": 2 00:21:26.392 }, 00:21:26.392 { 00:21:26.392 "dma_device_id": "system", 00:21:26.392 "dma_device_type": 1 00:21:26.392 }, 00:21:26.392 { 00:21:26.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.392 "dma_device_type": 2 00:21:26.392 } 00:21:26.392 ], 00:21:26.392 "driver_specific": { 00:21:26.392 "raid": { 00:21:26.392 "uuid": "241e99cf-9425-45c1-abdd-fe7531bfa50a", 00:21:26.392 "strip_size_kb": 64, 00:21:26.392 "state": "online", 00:21:26.392 "raid_level": "concat", 00:21:26.392 "superblock": true, 00:21:26.392 "num_base_bdevs": 3, 00:21:26.392 "num_base_bdevs_discovered": 3, 00:21:26.392 "num_base_bdevs_operational": 3, 00:21:26.392 "base_bdevs_list": [ 00:21:26.392 { 00:21:26.392 "name": "BaseBdev1", 00:21:26.392 "uuid": "19054649-e308-4161-b26f-8100b6384ac2", 00:21:26.392 "is_configured": true, 00:21:26.392 "data_offset": 2048, 00:21:26.392 "data_size": 63488 00:21:26.392 }, 00:21:26.392 { 00:21:26.392 "name": "BaseBdev2", 00:21:26.392 "uuid": "7f28bdd4-d20f-4fd4-912e-04e8cdc330ef", 00:21:26.392 "is_configured": true, 00:21:26.392 "data_offset": 2048, 00:21:26.392 "data_size": 63488 00:21:26.393 }, 00:21:26.393 { 00:21:26.393 "name": "BaseBdev3", 00:21:26.393 "uuid": "0ae9aad9-19bc-4982-89e3-a616727abd1a", 00:21:26.393 "is_configured": true, 00:21:26.393 "data_offset": 2048, 00:21:26.393 "data_size": 63488 00:21:26.393 } 00:21:26.393 ] 00:21:26.393 } 00:21:26.393 } 00:21:26.393 }' 00:21:26.393 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:26.651 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:26.651 BaseBdev2 00:21:26.651 BaseBdev3' 00:21:26.651 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.651 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:26.651 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.910 "name": "BaseBdev1", 00:21:26.910 "aliases": [ 00:21:26.910 "19054649-e308-4161-b26f-8100b6384ac2" 00:21:26.910 ], 00:21:26.910 "product_name": "Malloc disk", 00:21:26.910 "block_size": 512, 00:21:26.910 "num_blocks": 65536, 00:21:26.910 "uuid": "19054649-e308-4161-b26f-8100b6384ac2", 00:21:26.910 "assigned_rate_limits": { 00:21:26.910 "rw_ios_per_sec": 0, 00:21:26.910 "rw_mbytes_per_sec": 0, 00:21:26.910 "r_mbytes_per_sec": 0, 00:21:26.910 "w_mbytes_per_sec": 0 00:21:26.910 }, 00:21:26.910 "claimed": true, 00:21:26.910 "claim_type": "exclusive_write", 00:21:26.910 "zoned": false, 00:21:26.910 "supported_io_types": { 00:21:26.910 "read": true, 00:21:26.910 "write": true, 00:21:26.910 "unmap": true, 00:21:26.910 "write_zeroes": 
true, 00:21:26.910 "flush": true, 00:21:26.910 "reset": true, 00:21:26.910 "compare": false, 00:21:26.910 "compare_and_write": false, 00:21:26.910 "abort": true, 00:21:26.910 "nvme_admin": false, 00:21:26.910 "nvme_io": false 00:21:26.910 }, 00:21:26.910 "memory_domains": [ 00:21:26.910 { 00:21:26.910 "dma_device_id": "system", 00:21:26.910 "dma_device_type": 1 00:21:26.910 }, 00:21:26.910 { 00:21:26.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.910 "dma_device_type": 2 00:21:26.910 } 00:21:26.910 ], 00:21:26.910 "driver_specific": {} 00:21:26.910 }' 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.910 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.173 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.173 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.173 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.173 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.173 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:27.173 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:27.173 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:27.443 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:27.443 "name": "BaseBdev2", 00:21:27.443 "aliases": [ 00:21:27.443 "7f28bdd4-d20f-4fd4-912e-04e8cdc330ef" 00:21:27.443 ], 00:21:27.443 "product_name": "Malloc disk", 00:21:27.443 "block_size": 512, 00:21:27.443 "num_blocks": 65536, 00:21:27.443 "uuid": "7f28bdd4-d20f-4fd4-912e-04e8cdc330ef", 00:21:27.443 "assigned_rate_limits": { 00:21:27.443 "rw_ios_per_sec": 0, 00:21:27.443 "rw_mbytes_per_sec": 0, 00:21:27.443 "r_mbytes_per_sec": 0, 00:21:27.443 "w_mbytes_per_sec": 0 00:21:27.443 }, 00:21:27.443 "claimed": true, 00:21:27.443 "claim_type": "exclusive_write", 00:21:27.443 "zoned": false, 00:21:27.443 "supported_io_types": { 00:21:27.443 "read": true, 00:21:27.443 "write": true, 00:21:27.443 "unmap": true, 00:21:27.443 "write_zeroes": true, 00:21:27.443 "flush": true, 00:21:27.443 "reset": true, 00:21:27.443 "compare": false, 00:21:27.443 "compare_and_write": false, 00:21:27.443 "abort": true, 00:21:27.443 "nvme_admin": false, 00:21:27.443 "nvme_io": false 00:21:27.443 }, 00:21:27.443 "memory_domains": [ 00:21:27.443 { 00:21:27.443 "dma_device_id": "system", 00:21:27.443 "dma_device_type": 1 00:21:27.443 }, 00:21:27.443 { 00:21:27.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:27.443 "dma_device_type": 2 00:21:27.443 } 00:21:27.443 ], 00:21:27.443 "driver_specific": {} 00:21:27.443 }' 00:21:27.443 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.443 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.702 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.038 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:28.038 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:28.038 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:28.038 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:28.296 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:28.296 "name": "BaseBdev3", 00:21:28.296 "aliases": [ 00:21:28.296 "0ae9aad9-19bc-4982-89e3-a616727abd1a" 00:21:28.296 ], 00:21:28.296 "product_name": "Malloc disk", 00:21:28.296 "block_size": 512, 00:21:28.296 "num_blocks": 65536, 00:21:28.296 "uuid": "0ae9aad9-19bc-4982-89e3-a616727abd1a", 00:21:28.296 "assigned_rate_limits": { 00:21:28.296 "rw_ios_per_sec": 0, 00:21:28.296 "rw_mbytes_per_sec": 0, 00:21:28.296 "r_mbytes_per_sec": 0, 00:21:28.296 "w_mbytes_per_sec": 0 00:21:28.296 }, 00:21:28.296 "claimed": true, 00:21:28.296 "claim_type": "exclusive_write", 00:21:28.296 "zoned": false, 00:21:28.296 "supported_io_types": { 00:21:28.296 "read": true, 00:21:28.296 "write": true, 00:21:28.296 "unmap": true, 00:21:28.296 "write_zeroes": true, 00:21:28.296 "flush": true, 00:21:28.296 "reset": true, 00:21:28.296 "compare": false, 00:21:28.296 "compare_and_write": false, 00:21:28.296 "abort": true, 00:21:28.296 "nvme_admin": false, 00:21:28.296 "nvme_io": false 00:21:28.296 }, 00:21:28.296 "memory_domains": [ 00:21:28.296 { 00:21:28.296 "dma_device_id": "system", 00:21:28.296 "dma_device_type": 1 00:21:28.296 }, 00:21:28.296 { 00:21:28.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.296 "dma_device_type": 2 00:21:28.296 } 00:21:28.296 ], 00:21:28.296 "driver_specific": {} 00:21:28.296 }' 00:21:28.296 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.296 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.296 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:28.296 
11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:28.297 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:28.297 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:28.297 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.297 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.554 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:28.554 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.554 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.554 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:28.554 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:28.812 [2024-06-10 11:45:00.727689] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:28.812 [2024-06-10 11:45:00.727942] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.812 [2024-06-10 11:45:00.728109] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.812 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.070 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.070 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.070 11:45:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:29.070 "name": "Existed_Raid", 00:21:29.070 "uuid": "241e99cf-9425-45c1-abdd-fe7531bfa50a", 00:21:29.070 "strip_size_kb": 64, 00:21:29.070 "state": "offline", 00:21:29.070 "raid_level": "concat", 00:21:29.070 "superblock": true, 00:21:29.070 "num_base_bdevs": 3, 00:21:29.070 "num_base_bdevs_discovered": 2, 00:21:29.070 "num_base_bdevs_operational": 2, 00:21:29.070 "base_bdevs_list": [ 00:21:29.070 { 00:21:29.070 "name": null, 00:21:29.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.070 "is_configured": false, 00:21:29.070 "data_offset": 2048, 00:21:29.070 "data_size": 63488 00:21:29.070 }, 00:21:29.070 { 00:21:29.070 "name": "BaseBdev2", 00:21:29.070 "uuid": "7f28bdd4-d20f-4fd4-912e-04e8cdc330ef", 00:21:29.070 "is_configured": true, 00:21:29.070 "data_offset": 2048, 00:21:29.070 "data_size": 63488 00:21:29.070 }, 00:21:29.070 { 00:21:29.070 "name": "BaseBdev3", 00:21:29.070 "uuid": "0ae9aad9-19bc-4982-89e3-a616727abd1a", 00:21:29.070 "is_configured": true, 00:21:29.070 "data_offset": 2048, 00:21:29.070 "data_size": 63488 00:21:29.070 } 00:21:29.070 ] 00:21:29.070 }' 00:21:29.070 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:29.070 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.004 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:30.004 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:30.004 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:30.004 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.004 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:30.004 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:30.004 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:30.267 [2024-06-10 11:45:02.261409] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:30.563 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:30.563 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:30.563 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.563 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:30.821 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:30.821 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:30.821 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:31.079 [2024-06-10 11:45:02.896696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:31.079 [2024-06-10 11:45:02.897018] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:21:31.079 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:31.079 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:31.079 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.079 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:31.338 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:31.338 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:31.338 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:31.338 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:31.338 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:31.338 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:31.596 BaseBdev2 00:21:31.596 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:31.596 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:21:31.596 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:31.596 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:31.596 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:31.596 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:31.596 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:31.854 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:32.419 [ 00:21:32.419 { 00:21:32.419 "name": "BaseBdev2", 00:21:32.419 "aliases": [ 00:21:32.419 "efe6f80a-1814-44eb-97df-748c5c4c3f87" 00:21:32.419 ], 00:21:32.419 "product_name": "Malloc disk", 00:21:32.419 "block_size": 512, 00:21:32.419 "num_blocks": 65536, 00:21:32.419 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:32.419 "assigned_rate_limits": { 00:21:32.419 "rw_ios_per_sec": 0, 00:21:32.419 "rw_mbytes_per_sec": 0, 00:21:32.419 "r_mbytes_per_sec": 0, 00:21:32.419 "w_mbytes_per_sec": 0 00:21:32.419 }, 00:21:32.419 "claimed": false, 00:21:32.419 "zoned": false, 00:21:32.419 "supported_io_types": { 00:21:32.419 "read": true, 00:21:32.420 "write": true, 00:21:32.420 "unmap": true, 00:21:32.420 "write_zeroes": true, 00:21:32.420 "flush": true, 00:21:32.420 "reset": true, 00:21:32.420 "compare": false, 00:21:32.420 "compare_and_write": false, 00:21:32.420 "abort": true, 00:21:32.420 "nvme_admin": false, 00:21:32.420 "nvme_io": false 00:21:32.420 }, 00:21:32.420 "memory_domains": [ 00:21:32.420 { 00:21:32.420 "dma_device_id": "system", 00:21:32.420 "dma_device_type": 1 
00:21:32.420 }, 00:21:32.420 { 00:21:32.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.420 "dma_device_type": 2 00:21:32.420 } 00:21:32.420 ], 00:21:32.420 "driver_specific": {} 00:21:32.420 } 00:21:32.420 ] 00:21:32.420 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:32.420 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:32.420 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:32.420 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:32.677 BaseBdev3 00:21:32.677 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:32.677 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:21:32.677 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:32.677 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:32.677 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:32.677 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:32.677 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:32.936 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:33.193 [ 00:21:33.193 { 00:21:33.193 "name": "BaseBdev3", 00:21:33.193 "aliases": [ 00:21:33.193 "2d715b03-d1d5-48c8-99bc-301c90588f5d" 00:21:33.193 ], 00:21:33.193 "product_name": "Malloc disk", 00:21:33.193 "block_size": 512, 00:21:33.193 "num_blocks": 65536, 00:21:33.193 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:33.193 "assigned_rate_limits": { 00:21:33.193 "rw_ios_per_sec": 0, 00:21:33.193 "rw_mbytes_per_sec": 0, 00:21:33.193 "r_mbytes_per_sec": 0, 00:21:33.193 "w_mbytes_per_sec": 0 00:21:33.193 }, 00:21:33.193 "claimed": false, 00:21:33.193 "zoned": false, 00:21:33.193 "supported_io_types": { 00:21:33.193 "read": true, 00:21:33.193 "write": true, 00:21:33.193 "unmap": true, 00:21:33.193 "write_zeroes": true, 00:21:33.193 "flush": true, 00:21:33.193 "reset": true, 00:21:33.193 "compare": false, 00:21:33.193 "compare_and_write": false, 00:21:33.193 "abort": true, 00:21:33.193 "nvme_admin": false, 00:21:33.193 "nvme_io": false 00:21:33.193 }, 00:21:33.193 "memory_domains": [ 00:21:33.193 { 00:21:33.193 "dma_device_id": "system", 00:21:33.193 "dma_device_type": 1 00:21:33.193 }, 00:21:33.193 { 00:21:33.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.193 "dma_device_type": 2 00:21:33.193 } 00:21:33.193 ], 00:21:33.193 "driver_specific": {} 00:21:33.193 } 00:21:33.193 ] 00:21:33.450 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:33.450 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:33.450 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:33.450 11:45:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:33.707 [2024-06-10 11:45:05.513764] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:33.707 [2024-06-10 11:45:05.514112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:33.707 [2024-06-10 11:45:05.514263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:33.707 [2024-06-10 11:45:05.516635] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.707 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.966 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.966 "name": "Existed_Raid", 00:21:33.966 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:33.966 "strip_size_kb": 64, 00:21:33.966 "state": "configuring", 00:21:33.966 "raid_level": "concat", 00:21:33.966 "superblock": true, 00:21:33.966 "num_base_bdevs": 3, 00:21:33.966 "num_base_bdevs_discovered": 2, 00:21:33.966 "num_base_bdevs_operational": 3, 00:21:33.966 "base_bdevs_list": [ 00:21:33.966 { 00:21:33.966 "name": "BaseBdev1", 00:21:33.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.966 "is_configured": false, 00:21:33.966 "data_offset": 0, 00:21:33.966 "data_size": 0 00:21:33.966 }, 00:21:33.966 { 00:21:33.966 "name": "BaseBdev2", 00:21:33.966 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:33.966 "is_configured": true, 00:21:33.966 "data_offset": 2048, 00:21:33.966 "data_size": 63488 00:21:33.966 }, 00:21:33.966 { 00:21:33.966 "name": "BaseBdev3", 00:21:33.966 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:33.966 "is_configured": true, 00:21:33.966 "data_offset": 2048, 00:21:33.966 "data_size": 63488 00:21:33.966 } 00:21:33.966 ] 00:21:33.966 }' 00:21:33.966 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.966 11:45:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.531 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:34.789 [2024-06-10 11:45:06.714118] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.789 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.357 11:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.357 "name": "Existed_Raid", 00:21:35.357 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:35.357 "strip_size_kb": 64, 00:21:35.357 "state": "configuring", 00:21:35.357 "raid_level": "concat", 00:21:35.357 "superblock": true, 00:21:35.357 "num_base_bdevs": 3, 00:21:35.357 "num_base_bdevs_discovered": 1, 00:21:35.357 "num_base_bdevs_operational": 3, 00:21:35.357 "base_bdevs_list": [ 00:21:35.357 { 00:21:35.357 "name": "BaseBdev1", 00:21:35.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.357 "is_configured": false, 00:21:35.357 "data_offset": 0, 00:21:35.357 "data_size": 0 00:21:35.357 }, 00:21:35.357 { 00:21:35.357 "name": null, 00:21:35.357 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:35.357 "is_configured": false, 00:21:35.357 "data_offset": 2048, 00:21:35.357 "data_size": 63488 00:21:35.357 }, 00:21:35.357 { 00:21:35.357 "name": "BaseBdev3", 00:21:35.357 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:35.357 "is_configured": true, 00:21:35.357 "data_offset": 2048, 00:21:35.357 "data_size": 63488 00:21:35.357 } 00:21:35.357 ] 00:21:35.357 }' 00:21:35.357 11:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.357 11:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.922 11:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.922 11:45:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:36.179 11:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:36.179 11:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:36.438 [2024-06-10 11:45:08.472341] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.438 BaseBdev1 00:21:36.438 11:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:36.438 11:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:21:36.438 11:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:36.438 11:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:36.438 11:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:36.438 11:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:36.438 11:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:37.004 11:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:37.004 [ 00:21:37.004 { 00:21:37.004 "name": "BaseBdev1", 00:21:37.004 "aliases": [ 00:21:37.004 "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6" 00:21:37.004 ], 00:21:37.004 "product_name": "Malloc disk", 00:21:37.004 "block_size": 512, 00:21:37.004 "num_blocks": 65536, 00:21:37.004 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:37.004 "assigned_rate_limits": { 00:21:37.004 "rw_ios_per_sec": 0, 00:21:37.004 "rw_mbytes_per_sec": 0, 00:21:37.004 "r_mbytes_per_sec": 0, 00:21:37.004 "w_mbytes_per_sec": 0 00:21:37.004 }, 00:21:37.004 "claimed": true, 00:21:37.004 "claim_type": "exclusive_write", 00:21:37.004 "zoned": false, 00:21:37.004 "supported_io_types": { 00:21:37.004 "read": true, 00:21:37.004 "write": true, 00:21:37.004 "unmap": true, 00:21:37.004 "write_zeroes": true, 00:21:37.004 "flush": true, 00:21:37.004 "reset": true, 00:21:37.004 "compare": false, 00:21:37.004 "compare_and_write": false, 00:21:37.004 "abort": true, 00:21:37.004 "nvme_admin": false, 00:21:37.004 "nvme_io": false 00:21:37.004 }, 00:21:37.004 "memory_domains": [ 00:21:37.004 { 00:21:37.004 "dma_device_id": "system", 00:21:37.004 "dma_device_type": 1 00:21:37.004 }, 00:21:37.004 { 00:21:37.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.004 "dma_device_type": 2 00:21:37.004 } 00:21:37.004 ], 00:21:37.004 "driver_specific": {} 00:21:37.004 } 00:21:37.004 ] 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.004 11:45:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.004 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.262 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.262 "name": "Existed_Raid", 00:21:37.262 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:37.262 "strip_size_kb": 64, 00:21:37.262 "state": "configuring", 00:21:37.262 "raid_level": "concat", 00:21:37.262 "superblock": true, 00:21:37.262 "num_base_bdevs": 3, 00:21:37.262 "num_base_bdevs_discovered": 2, 00:21:37.262 "num_base_bdevs_operational": 3, 00:21:37.262 "base_bdevs_list": [ 00:21:37.262 { 00:21:37.262 "name": "BaseBdev1", 00:21:37.262 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:37.262 "is_configured": true, 00:21:37.262 "data_offset": 2048, 00:21:37.262 "data_size": 63488 00:21:37.262 }, 00:21:37.262 { 00:21:37.262 "name": null, 00:21:37.262 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:37.262 "is_configured": false, 00:21:37.262 "data_offset": 2048, 00:21:37.262 "data_size": 63488 00:21:37.262 }, 00:21:37.262 { 00:21:37.262 "name": "BaseBdev3", 00:21:37.262 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:37.262 "is_configured": true, 00:21:37.262 "data_offset": 2048, 00:21:37.262 "data_size": 63488 00:21:37.262 } 00:21:37.262 ] 00:21:37.262 }' 00:21:37.262 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.263 11:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.198 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:38.198 11:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.198 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:38.198 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:38.456 [2024-06-10 11:45:10.420926] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.456 11:45:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.456 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.715 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.715 "name": "Existed_Raid", 00:21:38.715 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:38.715 "strip_size_kb": 64, 00:21:38.715 "state": "configuring", 00:21:38.715 "raid_level": "concat", 00:21:38.715 "superblock": true, 00:21:38.715 "num_base_bdevs": 3, 00:21:38.715 "num_base_bdevs_discovered": 1, 00:21:38.715 "num_base_bdevs_operational": 3, 00:21:38.715 "base_bdevs_list": [ 00:21:38.715 { 00:21:38.715 "name": "BaseBdev1", 00:21:38.715 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:38.715 "is_configured": true, 00:21:38.715 "data_offset": 2048, 00:21:38.715 "data_size": 63488 00:21:38.715 }, 00:21:38.715 { 00:21:38.715 "name": null, 00:21:38.715 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:38.715 "is_configured": false, 00:21:38.715 "data_offset": 2048, 00:21:38.715 "data_size": 63488 00:21:38.715 }, 00:21:38.715 { 00:21:38.715 "name": null, 00:21:38.715 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:38.715 "is_configured": false, 00:21:38.715 "data_offset": 2048, 00:21:38.715 "data_size": 63488 00:21:38.715 } 00:21:38.715 ] 00:21:38.715 }' 00:21:38.715 11:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.715 11:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.650 11:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:39.650 11:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.651 11:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:39.651 11:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:40.218 [2024-06-10 11:45:11.985288] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.218 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.476 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.476 "name": "Existed_Raid", 00:21:40.476 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:40.476 "strip_size_kb": 64, 00:21:40.476 "state": "configuring", 00:21:40.476 "raid_level": "concat", 00:21:40.476 "superblock": true, 00:21:40.476 "num_base_bdevs": 3, 00:21:40.476 "num_base_bdevs_discovered": 2, 00:21:40.476 "num_base_bdevs_operational": 3, 00:21:40.476 "base_bdevs_list": [ 00:21:40.476 { 00:21:40.476 "name": "BaseBdev1", 00:21:40.476 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:40.476 "is_configured": true, 00:21:40.476 "data_offset": 2048, 00:21:40.476 "data_size": 63488 00:21:40.476 }, 00:21:40.476 { 00:21:40.476 "name": null, 00:21:40.476 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:40.476 "is_configured": false, 00:21:40.476 "data_offset": 2048, 00:21:40.476 "data_size": 63488 00:21:40.476 }, 00:21:40.476 { 00:21:40.476 "name": "BaseBdev3", 00:21:40.476 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:40.476 "is_configured": true, 00:21:40.476 "data_offset": 2048, 00:21:40.476 "data_size": 63488 00:21:40.476 } 00:21:40.476 ] 00:21:40.476 }' 00:21:40.476 11:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.476 11:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.042 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.042 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:41.300 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:41.300 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:41.558 [2024-06-10 11:45:13.485825] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:41.816 11:45:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.816 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.074 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.074 "name": "Existed_Raid", 00:21:42.074 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:42.074 "strip_size_kb": 64, 00:21:42.074 "state": "configuring", 00:21:42.074 "raid_level": "concat", 00:21:42.074 "superblock": true, 00:21:42.074 "num_base_bdevs": 3, 00:21:42.074 "num_base_bdevs_discovered": 1, 00:21:42.074 "num_base_bdevs_operational": 3, 00:21:42.074 "base_bdevs_list": [ 00:21:42.074 { 00:21:42.074 "name": null, 00:21:42.074 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:42.074 "is_configured": false, 00:21:42.074 "data_offset": 2048, 00:21:42.074 "data_size": 63488 00:21:42.074 }, 00:21:42.074 { 00:21:42.074 "name": null, 00:21:42.074 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:42.074 "is_configured": false, 00:21:42.074 "data_offset": 2048, 00:21:42.074 "data_size": 63488 00:21:42.074 }, 00:21:42.074 { 00:21:42.074 "name": "BaseBdev3", 00:21:42.074 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:42.074 "is_configured": true, 00:21:42.074 "data_offset": 2048, 00:21:42.074 "data_size": 63488 00:21:42.074 } 00:21:42.074 ] 00:21:42.074 }' 00:21:42.074 11:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.074 11:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.640 11:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:42.640 11:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.899 11:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:42.899 11:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
00:21:43.157 [2024-06-10 11:45:15.061949] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.157 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.443 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.443 "name": "Existed_Raid", 00:21:43.443 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:43.443 "strip_size_kb": 64, 00:21:43.443 "state": "configuring", 00:21:43.443 "raid_level": "concat", 00:21:43.443 "superblock": true, 00:21:43.443 "num_base_bdevs": 3, 00:21:43.444 "num_base_bdevs_discovered": 2, 00:21:43.444 "num_base_bdevs_operational": 3, 00:21:43.444 "base_bdevs_list": [ 00:21:43.444 { 00:21:43.444 "name": null, 00:21:43.444 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:43.444 "is_configured": false, 00:21:43.444 "data_offset": 2048, 00:21:43.444 "data_size": 63488 00:21:43.444 }, 00:21:43.444 { 00:21:43.444 "name": "BaseBdev2", 00:21:43.444 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:43.444 "is_configured": true, 00:21:43.444 "data_offset": 2048, 00:21:43.444 "data_size": 63488 00:21:43.444 }, 00:21:43.444 { 00:21:43.444 "name": "BaseBdev3", 00:21:43.444 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:43.444 "is_configured": true, 00:21:43.444 "data_offset": 2048, 00:21:43.444 "data_size": 63488 00:21:43.444 } 00:21:43.444 ] 00:21:43.444 }' 00:21:43.444 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.444 11:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.020 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:44.021 11:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.279 11:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:44.279 11:45:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:44.279 11:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.537 11:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7a4439d8-d1e6-47e9-91f0-4dc79889d4c6 00:21:44.795 [2024-06-10 11:45:16.686428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:44.795 [2024-06-10 11:45:16.686990] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:44.795 [2024-06-10 11:45:16.687134] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:44.795 [2024-06-10 11:45:16.687441] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:44.795 [2024-06-10 11:45:16.687968] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:44.795 [2024-06-10 11:45:16.688107] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:21:44.795 [2024-06-10 11:45:16.688420] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.795 NewBaseBdev 00:21:44.795 11:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:44.795 11:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:21:44.795 11:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:44.795 11:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:44.795 11:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:44.795 11:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:44.795 11:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:45.053 11:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:45.311 [ 00:21:45.311 { 00:21:45.311 "name": "NewBaseBdev", 00:21:45.311 "aliases": [ 00:21:45.311 "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6" 00:21:45.311 ], 00:21:45.311 "product_name": "Malloc disk", 00:21:45.311 "block_size": 512, 00:21:45.311 "num_blocks": 65536, 00:21:45.311 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:45.311 "assigned_rate_limits": { 00:21:45.311 "rw_ios_per_sec": 0, 00:21:45.311 "rw_mbytes_per_sec": 0, 00:21:45.311 "r_mbytes_per_sec": 0, 00:21:45.311 "w_mbytes_per_sec": 0 00:21:45.311 }, 00:21:45.311 "claimed": true, 00:21:45.311 "claim_type": "exclusive_write", 00:21:45.311 "zoned": false, 00:21:45.311 "supported_io_types": { 00:21:45.311 "read": true, 00:21:45.311 "write": true, 00:21:45.311 "unmap": true, 00:21:45.311 "write_zeroes": true, 00:21:45.311 "flush": true, 00:21:45.311 "reset": true, 00:21:45.311 "compare": false, 00:21:45.311 "compare_and_write": false, 00:21:45.311 "abort": true, 00:21:45.311 "nvme_admin": false, 00:21:45.311 "nvme_io": false 00:21:45.311 }, 00:21:45.311 
"memory_domains": [ 00:21:45.311 { 00:21:45.311 "dma_device_id": "system", 00:21:45.311 "dma_device_type": 1 00:21:45.311 }, 00:21:45.311 { 00:21:45.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.311 "dma_device_type": 2 00:21:45.311 } 00:21:45.311 ], 00:21:45.311 "driver_specific": {} 00:21:45.311 } 00:21:45.311 ] 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.311 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.569 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.569 "name": "Existed_Raid", 00:21:45.569 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:45.569 "strip_size_kb": 64, 00:21:45.569 "state": "online", 00:21:45.569 "raid_level": "concat", 00:21:45.569 "superblock": true, 00:21:45.569 "num_base_bdevs": 3, 00:21:45.569 "num_base_bdevs_discovered": 3, 00:21:45.569 "num_base_bdevs_operational": 3, 00:21:45.569 "base_bdevs_list": [ 00:21:45.569 { 00:21:45.569 "name": "NewBaseBdev", 00:21:45.569 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:45.569 "is_configured": true, 00:21:45.569 "data_offset": 2048, 00:21:45.569 "data_size": 63488 00:21:45.569 }, 00:21:45.569 { 00:21:45.569 "name": "BaseBdev2", 00:21:45.569 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:45.569 "is_configured": true, 00:21:45.569 "data_offset": 2048, 00:21:45.569 "data_size": 63488 00:21:45.569 }, 00:21:45.569 { 00:21:45.569 "name": "BaseBdev3", 00:21:45.569 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:45.569 "is_configured": true, 00:21:45.569 "data_offset": 2048, 00:21:45.569 "data_size": 63488 00:21:45.569 } 00:21:45.569 ] 00:21:45.569 }' 00:21:45.569 11:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.569 11:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.134 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:46.134 11:45:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:46.134 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:46.134 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:46.134 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:46.134 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:46.135 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:46.135 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:46.392 [2024-06-10 11:45:18.299534] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.392 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:46.392 "name": "Existed_Raid", 00:21:46.392 "aliases": [ 00:21:46.392 "f536d007-7cfc-497a-ba9e-ab402ec67b91" 00:21:46.392 ], 00:21:46.392 "product_name": "Raid Volume", 00:21:46.392 "block_size": 512, 00:21:46.392 "num_blocks": 190464, 00:21:46.392 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:46.392 "assigned_rate_limits": { 00:21:46.392 "rw_ios_per_sec": 0, 00:21:46.392 "rw_mbytes_per_sec": 0, 00:21:46.392 "r_mbytes_per_sec": 0, 00:21:46.392 "w_mbytes_per_sec": 0 00:21:46.392 }, 00:21:46.392 "claimed": false, 00:21:46.392 "zoned": false, 00:21:46.392 "supported_io_types": { 00:21:46.392 "read": true, 00:21:46.392 "write": true, 00:21:46.392 "unmap": true, 00:21:46.392 "write_zeroes": true, 00:21:46.392 "flush": true, 00:21:46.392 "reset": true, 00:21:46.392 "compare": false, 00:21:46.392 "compare_and_write": false, 00:21:46.392 "abort": false, 00:21:46.392 "nvme_admin": false, 00:21:46.392 "nvme_io": false 00:21:46.392 }, 00:21:46.392 "memory_domains": [ 00:21:46.392 { 00:21:46.392 "dma_device_id": "system", 00:21:46.392 "dma_device_type": 1 00:21:46.392 }, 00:21:46.392 { 00:21:46.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.392 "dma_device_type": 2 00:21:46.392 }, 00:21:46.392 { 00:21:46.392 "dma_device_id": "system", 00:21:46.392 "dma_device_type": 1 00:21:46.392 }, 00:21:46.392 { 00:21:46.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.392 "dma_device_type": 2 00:21:46.392 }, 00:21:46.392 { 00:21:46.392 "dma_device_id": "system", 00:21:46.392 "dma_device_type": 1 00:21:46.392 }, 00:21:46.392 { 00:21:46.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.392 "dma_device_type": 2 00:21:46.392 } 00:21:46.392 ], 00:21:46.392 "driver_specific": { 00:21:46.392 "raid": { 00:21:46.392 "uuid": "f536d007-7cfc-497a-ba9e-ab402ec67b91", 00:21:46.392 "strip_size_kb": 64, 00:21:46.392 "state": "online", 00:21:46.392 "raid_level": "concat", 00:21:46.392 "superblock": true, 00:21:46.392 "num_base_bdevs": 3, 00:21:46.392 "num_base_bdevs_discovered": 3, 00:21:46.392 "num_base_bdevs_operational": 3, 00:21:46.392 "base_bdevs_list": [ 00:21:46.392 { 00:21:46.392 "name": "NewBaseBdev", 00:21:46.392 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:46.392 "is_configured": true, 00:21:46.393 "data_offset": 2048, 00:21:46.393 "data_size": 63488 00:21:46.393 }, 00:21:46.393 { 00:21:46.393 "name": "BaseBdev2", 00:21:46.393 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:46.393 "is_configured": true, 00:21:46.393 "data_offset": 2048, 00:21:46.393 "data_size": 63488 
00:21:46.393 }, 00:21:46.393 { 00:21:46.393 "name": "BaseBdev3", 00:21:46.393 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:46.393 "is_configured": true, 00:21:46.393 "data_offset": 2048, 00:21:46.393 "data_size": 63488 00:21:46.393 } 00:21:46.393 ] 00:21:46.393 } 00:21:46.393 } 00:21:46.393 }' 00:21:46.393 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:46.393 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:46.393 BaseBdev2 00:21:46.393 BaseBdev3' 00:21:46.393 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:46.393 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:46.393 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:46.651 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:46.651 "name": "NewBaseBdev", 00:21:46.651 "aliases": [ 00:21:46.651 "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6" 00:21:46.651 ], 00:21:46.651 "product_name": "Malloc disk", 00:21:46.651 "block_size": 512, 00:21:46.651 "num_blocks": 65536, 00:21:46.651 "uuid": "7a4439d8-d1e6-47e9-91f0-4dc79889d4c6", 00:21:46.651 "assigned_rate_limits": { 00:21:46.651 "rw_ios_per_sec": 0, 00:21:46.651 "rw_mbytes_per_sec": 0, 00:21:46.651 "r_mbytes_per_sec": 0, 00:21:46.651 "w_mbytes_per_sec": 0 00:21:46.651 }, 00:21:46.651 "claimed": true, 00:21:46.651 "claim_type": "exclusive_write", 00:21:46.651 "zoned": false, 00:21:46.651 "supported_io_types": { 00:21:46.651 "read": true, 00:21:46.651 "write": true, 00:21:46.651 "unmap": true, 00:21:46.651 "write_zeroes": true, 00:21:46.651 "flush": true, 00:21:46.651 "reset": true, 00:21:46.651 "compare": false, 00:21:46.651 "compare_and_write": false, 00:21:46.651 "abort": true, 00:21:46.651 "nvme_admin": false, 00:21:46.651 "nvme_io": false 00:21:46.651 }, 00:21:46.651 "memory_domains": [ 00:21:46.651 { 00:21:46.651 "dma_device_id": "system", 00:21:46.651 "dma_device_type": 1 00:21:46.651 }, 00:21:46.651 { 00:21:46.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.651 "dma_device_type": 2 00:21:46.651 } 00:21:46.651 ], 00:21:46.651 "driver_specific": {} 00:21:46.651 }' 00:21:46.651 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.651 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.909 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:46.909 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.909 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.909 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:46.909 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.909 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.909 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:46.909 11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.167 
11:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.167 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:47.167 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:47.167 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:47.167 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.426 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:47.426 "name": "BaseBdev2", 00:21:47.426 "aliases": [ 00:21:47.426 "efe6f80a-1814-44eb-97df-748c5c4c3f87" 00:21:47.426 ], 00:21:47.426 "product_name": "Malloc disk", 00:21:47.426 "block_size": 512, 00:21:47.426 "num_blocks": 65536, 00:21:47.426 "uuid": "efe6f80a-1814-44eb-97df-748c5c4c3f87", 00:21:47.426 "assigned_rate_limits": { 00:21:47.426 "rw_ios_per_sec": 0, 00:21:47.426 "rw_mbytes_per_sec": 0, 00:21:47.426 "r_mbytes_per_sec": 0, 00:21:47.426 "w_mbytes_per_sec": 0 00:21:47.426 }, 00:21:47.426 "claimed": true, 00:21:47.426 "claim_type": "exclusive_write", 00:21:47.426 "zoned": false, 00:21:47.426 "supported_io_types": { 00:21:47.426 "read": true, 00:21:47.426 "write": true, 00:21:47.426 "unmap": true, 00:21:47.426 "write_zeroes": true, 00:21:47.426 "flush": true, 00:21:47.426 "reset": true, 00:21:47.426 "compare": false, 00:21:47.426 "compare_and_write": false, 00:21:47.426 "abort": true, 00:21:47.426 "nvme_admin": false, 00:21:47.426 "nvme_io": false 00:21:47.426 }, 00:21:47.426 "memory_domains": [ 00:21:47.426 { 00:21:47.426 "dma_device_id": "system", 00:21:47.426 "dma_device_type": 1 00:21:47.426 }, 00:21:47.426 { 00:21:47.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.426 "dma_device_type": 2 00:21:47.426 } 00:21:47.426 ], 00:21:47.426 "driver_specific": {} 00:21:47.426 }' 00:21:47.426 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.426 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.684 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:47.684 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.684 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.684 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:47.685 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.685 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.685 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:47.685 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.943 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.943 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:47.943 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:47.943 11:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.943 11:45:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:48.200 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:48.200 "name": "BaseBdev3", 00:21:48.200 "aliases": [ 00:21:48.200 "2d715b03-d1d5-48c8-99bc-301c90588f5d" 00:21:48.200 ], 00:21:48.200 "product_name": "Malloc disk", 00:21:48.200 "block_size": 512, 00:21:48.200 "num_blocks": 65536, 00:21:48.200 "uuid": "2d715b03-d1d5-48c8-99bc-301c90588f5d", 00:21:48.200 "assigned_rate_limits": { 00:21:48.200 "rw_ios_per_sec": 0, 00:21:48.200 "rw_mbytes_per_sec": 0, 00:21:48.200 "r_mbytes_per_sec": 0, 00:21:48.200 "w_mbytes_per_sec": 0 00:21:48.200 }, 00:21:48.200 "claimed": true, 00:21:48.200 "claim_type": "exclusive_write", 00:21:48.200 "zoned": false, 00:21:48.200 "supported_io_types": { 00:21:48.200 "read": true, 00:21:48.200 "write": true, 00:21:48.201 "unmap": true, 00:21:48.201 "write_zeroes": true, 00:21:48.201 "flush": true, 00:21:48.201 "reset": true, 00:21:48.201 "compare": false, 00:21:48.201 "compare_and_write": false, 00:21:48.201 "abort": true, 00:21:48.201 "nvme_admin": false, 00:21:48.201 "nvme_io": false 00:21:48.201 }, 00:21:48.201 "memory_domains": [ 00:21:48.201 { 00:21:48.201 "dma_device_id": "system", 00:21:48.201 "dma_device_type": 1 00:21:48.201 }, 00:21:48.201 { 00:21:48.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.201 "dma_device_type": 2 00:21:48.201 } 00:21:48.201 ], 00:21:48.201 "driver_specific": {} 00:21:48.201 }' 00:21:48.201 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.201 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.201 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:48.201 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.201 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.201 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:48.201 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.201 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.459 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:48.459 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.459 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.459 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:48.459 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:48.716 [2024-06-10 11:45:20.671791] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:48.716 [2024-06-10 11:45:20.672051] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:48.716 [2024-06-10 11:45:20.672240] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:48.716 [2024-06-10 11:45:20.672396] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:48.716 [2024-06-10 11:45:20.672495] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 130511 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 130511 ']' 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 130511 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 130511 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 130511' 00:21:48.716 killing process with pid 130511 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 130511 00:21:48.716 [2024-06-10 11:45:20.714025] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:48.716 11:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 130511 00:21:49.282 [2024-06-10 11:45:21.059139] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:50.653 11:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:50.653 ************************************ 00:21:50.653 END TEST raid_state_function_test_sb 00:21:50.653 ************************************ 00:21:50.653 00:21:50.653 real 0m33.439s 00:21:50.653 user 1m0.855s 00:21:50.653 sys 0m4.294s 00:21:50.653 11:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:50.653 11:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.653 11:45:22 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:21:50.653 11:45:22 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:21:50.653 11:45:22 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:50.653 11:45:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:50.653 ************************************ 00:21:50.653 START TEST raid_superblock_test 00:21:50.653 ************************************ 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test concat 3 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 
00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=131529 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 131529 /var/tmp/spdk-raid.sock 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 131529 ']' 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:50.653 11:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:50.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:50.654 11:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:50.654 11:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.912 [2024-06-10 11:45:22.716932] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
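For readers reconstructing the flow from the trace: raid_superblock_test drives a bare bdev_svc application over a dedicated RPC socket rather than a full SPDK target, then builds its raid members out of malloc bdevs wrapped in passthru bdevs. A minimal sketch of that setup, pieced together from the commands echoed above and below (the repo path, socket name, strip size, and the waitforlisten helper from autotest_common.sh are taken from the trace; the loop structure is a simplification, not the literal bdev_raid.sh source):

  # Sketch only - assumes the SPDK checkout and socket path shown in the trace.
  rootdir=/home/vagrant/spdk_repo/spdk
  rpc_py="$rootdir/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Launch the bare bdev service with bdev_raid debug logging and keep its pid.
  "$rootdir/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from autotest_common.sh

  # Three 32 MiB malloc bdevs (512 B blocks) wrapped in passthru bdevs become pt1..pt3.
  for i in 1 2 3; do
    $rpc_py bdev_malloc_create 32 512 -b "malloc$i"
    $rpc_py bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done

  # Assemble the 64 KiB-strip concat raid with an on-disk superblock (-s).
  $rpc_py bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The remainder of the trace below verifies the resulting raid_bdev1 and its base bdevs via bdev_get_bdevs/bdev_raid_get_bdevs piped through jq, and checks the superblock examine path by deleting and re-registering the passthru bdevs.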
00:21:50.912 [2024-06-10 11:45:22.717447] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131529 ] 00:21:50.912 [2024-06-10 11:45:22.911920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.170 [2024-06-10 11:45:23.154732] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.428 [2024-06-10 11:45:23.371344] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:51.686 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:51.944 malloc1 00:21:51.944 11:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:52.202 [2024-06-10 11:45:24.148226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:52.202 [2024-06-10 11:45:24.148573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.202 [2024-06-10 11:45:24.148720] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:52.202 [2024-06-10 11:45:24.148828] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.202 [2024-06-10 11:45:24.151729] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.202 [2024-06-10 11:45:24.151959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:52.202 pt1 00:21:52.202 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:52.202 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:52.202 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:52.202 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:52.202 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:52.202 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:21:52.202 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:52.202 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:52.203 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:52.460 malloc2 00:21:52.460 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:53.027 [2024-06-10 11:45:24.817401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:53.027 [2024-06-10 11:45:24.817792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.027 [2024-06-10 11:45:24.818035] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:53.027 [2024-06-10 11:45:24.818206] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.027 [2024-06-10 11:45:24.821128] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.027 [2024-06-10 11:45:24.821346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:53.027 pt2 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:53.027 11:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:53.285 malloc3 00:21:53.286 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:53.543 [2024-06-10 11:45:25.385348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:53.543 [2024-06-10 11:45:25.385720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.543 [2024-06-10 11:45:25.385896] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:53.543 [2024-06-10 11:45:25.386028] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.543 [2024-06-10 11:45:25.389131] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.544 [2024-06-10 11:45:25.389374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:53.544 pt3 00:21:53.544 11:45:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:53.544 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:53.544 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:53.802 [2024-06-10 11:45:25.625834] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:53.802 [2024-06-10 11:45:25.628384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:53.802 [2024-06-10 11:45:25.628653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:53.802 [2024-06-10 11:45:25.629015] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:53.802 [2024-06-10 11:45:25.629142] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:53.802 [2024-06-10 11:45:25.629372] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:53.802 [2024-06-10 11:45:25.629864] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:53.802 [2024-06-10 11:45:25.629989] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:21:53.802 [2024-06-10 11:45:25.630326] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.802 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.060 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.060 "name": "raid_bdev1", 00:21:54.060 "uuid": "ee32fb19-9cdc-4410-aef3-8837d2ca42c4", 00:21:54.060 "strip_size_kb": 64, 00:21:54.060 "state": "online", 00:21:54.060 "raid_level": "concat", 00:21:54.060 "superblock": true, 00:21:54.060 "num_base_bdevs": 3, 00:21:54.060 "num_base_bdevs_discovered": 3, 00:21:54.061 "num_base_bdevs_operational": 3, 00:21:54.061 "base_bdevs_list": [ 00:21:54.061 { 00:21:54.061 "name": "pt1", 00:21:54.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:54.061 
"is_configured": true, 00:21:54.061 "data_offset": 2048, 00:21:54.061 "data_size": 63488 00:21:54.061 }, 00:21:54.061 { 00:21:54.061 "name": "pt2", 00:21:54.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:54.061 "is_configured": true, 00:21:54.061 "data_offset": 2048, 00:21:54.061 "data_size": 63488 00:21:54.061 }, 00:21:54.061 { 00:21:54.061 "name": "pt3", 00:21:54.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:54.061 "is_configured": true, 00:21:54.061 "data_offset": 2048, 00:21:54.061 "data_size": 63488 00:21:54.061 } 00:21:54.061 ] 00:21:54.061 }' 00:21:54.061 11:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.061 11:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:54.626 [2024-06-10 11:45:26.638832] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:54.626 "name": "raid_bdev1", 00:21:54.626 "aliases": [ 00:21:54.626 "ee32fb19-9cdc-4410-aef3-8837d2ca42c4" 00:21:54.626 ], 00:21:54.626 "product_name": "Raid Volume", 00:21:54.626 "block_size": 512, 00:21:54.626 "num_blocks": 190464, 00:21:54.626 "uuid": "ee32fb19-9cdc-4410-aef3-8837d2ca42c4", 00:21:54.626 "assigned_rate_limits": { 00:21:54.626 "rw_ios_per_sec": 0, 00:21:54.626 "rw_mbytes_per_sec": 0, 00:21:54.626 "r_mbytes_per_sec": 0, 00:21:54.626 "w_mbytes_per_sec": 0 00:21:54.626 }, 00:21:54.626 "claimed": false, 00:21:54.626 "zoned": false, 00:21:54.626 "supported_io_types": { 00:21:54.626 "read": true, 00:21:54.626 "write": true, 00:21:54.626 "unmap": true, 00:21:54.626 "write_zeroes": true, 00:21:54.626 "flush": true, 00:21:54.626 "reset": true, 00:21:54.626 "compare": false, 00:21:54.626 "compare_and_write": false, 00:21:54.626 "abort": false, 00:21:54.626 "nvme_admin": false, 00:21:54.626 "nvme_io": false 00:21:54.626 }, 00:21:54.626 "memory_domains": [ 00:21:54.626 { 00:21:54.626 "dma_device_id": "system", 00:21:54.626 "dma_device_type": 1 00:21:54.626 }, 00:21:54.626 { 00:21:54.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.626 "dma_device_type": 2 00:21:54.626 }, 00:21:54.626 { 00:21:54.626 "dma_device_id": "system", 00:21:54.626 "dma_device_type": 1 00:21:54.626 }, 00:21:54.626 { 00:21:54.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.626 "dma_device_type": 2 00:21:54.626 }, 00:21:54.626 { 00:21:54.626 "dma_device_id": "system", 00:21:54.626 "dma_device_type": 1 00:21:54.626 }, 00:21:54.626 { 00:21:54.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.626 "dma_device_type": 
2 00:21:54.626 } 00:21:54.626 ], 00:21:54.626 "driver_specific": { 00:21:54.626 "raid": { 00:21:54.626 "uuid": "ee32fb19-9cdc-4410-aef3-8837d2ca42c4", 00:21:54.626 "strip_size_kb": 64, 00:21:54.626 "state": "online", 00:21:54.626 "raid_level": "concat", 00:21:54.626 "superblock": true, 00:21:54.626 "num_base_bdevs": 3, 00:21:54.626 "num_base_bdevs_discovered": 3, 00:21:54.626 "num_base_bdevs_operational": 3, 00:21:54.626 "base_bdevs_list": [ 00:21:54.626 { 00:21:54.626 "name": "pt1", 00:21:54.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:54.626 "is_configured": true, 00:21:54.626 "data_offset": 2048, 00:21:54.626 "data_size": 63488 00:21:54.626 }, 00:21:54.626 { 00:21:54.626 "name": "pt2", 00:21:54.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:54.626 "is_configured": true, 00:21:54.626 "data_offset": 2048, 00:21:54.626 "data_size": 63488 00:21:54.626 }, 00:21:54.626 { 00:21:54.626 "name": "pt3", 00:21:54.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:54.626 "is_configured": true, 00:21:54.626 "data_offset": 2048, 00:21:54.626 "data_size": 63488 00:21:54.626 } 00:21:54.626 ] 00:21:54.626 } 00:21:54.626 } 00:21:54.626 }' 00:21:54.626 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:54.920 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:54.920 pt2 00:21:54.920 pt3' 00:21:54.920 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:54.920 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:54.920 11:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:55.178 "name": "pt1", 00:21:55.178 "aliases": [ 00:21:55.178 "00000000-0000-0000-0000-000000000001" 00:21:55.178 ], 00:21:55.178 "product_name": "passthru", 00:21:55.178 "block_size": 512, 00:21:55.178 "num_blocks": 65536, 00:21:55.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.178 "assigned_rate_limits": { 00:21:55.178 "rw_ios_per_sec": 0, 00:21:55.178 "rw_mbytes_per_sec": 0, 00:21:55.178 "r_mbytes_per_sec": 0, 00:21:55.178 "w_mbytes_per_sec": 0 00:21:55.178 }, 00:21:55.178 "claimed": true, 00:21:55.178 "claim_type": "exclusive_write", 00:21:55.178 "zoned": false, 00:21:55.178 "supported_io_types": { 00:21:55.178 "read": true, 00:21:55.178 "write": true, 00:21:55.178 "unmap": true, 00:21:55.178 "write_zeroes": true, 00:21:55.178 "flush": true, 00:21:55.178 "reset": true, 00:21:55.178 "compare": false, 00:21:55.178 "compare_and_write": false, 00:21:55.178 "abort": true, 00:21:55.178 "nvme_admin": false, 00:21:55.178 "nvme_io": false 00:21:55.178 }, 00:21:55.178 "memory_domains": [ 00:21:55.178 { 00:21:55.178 "dma_device_id": "system", 00:21:55.178 "dma_device_type": 1 00:21:55.178 }, 00:21:55.178 { 00:21:55.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.178 "dma_device_type": 2 00:21:55.178 } 00:21:55.178 ], 00:21:55.178 "driver_specific": { 00:21:55.178 "passthru": { 00:21:55.178 "name": "pt1", 00:21:55.178 "base_bdev_name": "malloc1" 00:21:55.178 } 00:21:55.178 } 00:21:55.178 }' 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.178 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.436 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:55.436 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.436 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.436 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:55.436 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:55.436 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:55.436 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:55.694 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:55.694 "name": "pt2", 00:21:55.694 "aliases": [ 00:21:55.694 "00000000-0000-0000-0000-000000000002" 00:21:55.694 ], 00:21:55.694 "product_name": "passthru", 00:21:55.694 "block_size": 512, 00:21:55.694 "num_blocks": 65536, 00:21:55.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.694 "assigned_rate_limits": { 00:21:55.694 "rw_ios_per_sec": 0, 00:21:55.694 "rw_mbytes_per_sec": 0, 00:21:55.694 "r_mbytes_per_sec": 0, 00:21:55.694 "w_mbytes_per_sec": 0 00:21:55.694 }, 00:21:55.694 "claimed": true, 00:21:55.694 "claim_type": "exclusive_write", 00:21:55.694 "zoned": false, 00:21:55.694 "supported_io_types": { 00:21:55.694 "read": true, 00:21:55.694 "write": true, 00:21:55.694 "unmap": true, 00:21:55.694 "write_zeroes": true, 00:21:55.694 "flush": true, 00:21:55.694 "reset": true, 00:21:55.694 "compare": false, 00:21:55.694 "compare_and_write": false, 00:21:55.694 "abort": true, 00:21:55.694 "nvme_admin": false, 00:21:55.694 "nvme_io": false 00:21:55.694 }, 00:21:55.694 "memory_domains": [ 00:21:55.694 { 00:21:55.694 "dma_device_id": "system", 00:21:55.694 "dma_device_type": 1 00:21:55.694 }, 00:21:55.694 { 00:21:55.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.694 "dma_device_type": 2 00:21:55.694 } 00:21:55.694 ], 00:21:55.694 "driver_specific": { 00:21:55.694 "passthru": { 00:21:55.694 "name": "pt2", 00:21:55.694 "base_bdev_name": "malloc2" 00:21:55.694 } 00:21:55.694 } 00:21:55.694 }' 00:21:55.694 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.694 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.694 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:55.694 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.952 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.952 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:55.952 11:45:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.952 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.952 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:55.952 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.952 11:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.210 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:56.210 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:56.210 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:56.210 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:56.468 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:56.468 "name": "pt3", 00:21:56.468 "aliases": [ 00:21:56.468 "00000000-0000-0000-0000-000000000003" 00:21:56.468 ], 00:21:56.468 "product_name": "passthru", 00:21:56.468 "block_size": 512, 00:21:56.468 "num_blocks": 65536, 00:21:56.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:56.468 "assigned_rate_limits": { 00:21:56.468 "rw_ios_per_sec": 0, 00:21:56.468 "rw_mbytes_per_sec": 0, 00:21:56.468 "r_mbytes_per_sec": 0, 00:21:56.468 "w_mbytes_per_sec": 0 00:21:56.468 }, 00:21:56.468 "claimed": true, 00:21:56.468 "claim_type": "exclusive_write", 00:21:56.468 "zoned": false, 00:21:56.468 "supported_io_types": { 00:21:56.468 "read": true, 00:21:56.468 "write": true, 00:21:56.468 "unmap": true, 00:21:56.468 "write_zeroes": true, 00:21:56.468 "flush": true, 00:21:56.468 "reset": true, 00:21:56.468 "compare": false, 00:21:56.468 "compare_and_write": false, 00:21:56.468 "abort": true, 00:21:56.468 "nvme_admin": false, 00:21:56.468 "nvme_io": false 00:21:56.468 }, 00:21:56.468 "memory_domains": [ 00:21:56.468 { 00:21:56.468 "dma_device_id": "system", 00:21:56.468 "dma_device_type": 1 00:21:56.468 }, 00:21:56.468 { 00:21:56.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.468 "dma_device_type": 2 00:21:56.468 } 00:21:56.468 ], 00:21:56.468 "driver_specific": { 00:21:56.468 "passthru": { 00:21:56.469 "name": "pt3", 00:21:56.469 "base_bdev_name": "malloc3" 00:21:56.469 } 00:21:56.469 } 00:21:56.469 }' 00:21:56.469 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.469 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.469 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:56.469 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:56.469 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:56.469 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:56.469 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.727 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.727 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:56.727 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.727 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:21:56.727 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:56.727 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:56.727 11:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:57.292 [2024-06-10 11:45:29.071437] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.292 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ee32fb19-9cdc-4410-aef3-8837d2ca42c4 00:21:57.292 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z ee32fb19-9cdc-4410-aef3-8837d2ca42c4 ']' 00:21:57.292 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:57.292 [2024-06-10 11:45:29.319138] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.292 [2024-06-10 11:45:29.319393] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.292 [2024-06-10 11:45:29.319571] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.292 [2024-06-10 11:45:29.319849] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.292 [2024-06-10 11:45:29.319950] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:21:57.292 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.292 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:57.857 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:57.857 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:57.857 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:57.857 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:57.857 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:57.857 11:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:58.116 11:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:58.116 11:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:58.374 11:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:58.374 11:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:58.632 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:58.890 [2024-06-10 11:45:30.895472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:58.890 [2024-06-10 11:45:30.898053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:58.890 [2024-06-10 11:45:30.898326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:58.890 [2024-06-10 11:45:30.898506] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:58.890 [2024-06-10 11:45:30.898714] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:58.890 [2024-06-10 11:45:30.898804] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:58.890 [2024-06-10 11:45:30.898930] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:58.890 [2024-06-10 11:45:30.898976] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:21:58.890 request: 00:21:58.890 { 00:21:58.890 "name": "raid_bdev1", 00:21:58.890 "raid_level": "concat", 00:21:58.890 "base_bdevs": [ 00:21:58.890 "malloc1", 00:21:58.890 "malloc2", 00:21:58.890 "malloc3" 00:21:58.890 ], 00:21:58.890 "strip_size_kb": 64, 00:21:58.890 "superblock": false, 00:21:58.890 "method": "bdev_raid_create", 00:21:58.890 "req_id": 1 00:21:58.890 } 00:21:58.890 Got JSON-RPC error response 00:21:58.890 response: 00:21:58.890 { 00:21:58.890 "code": -17, 00:21:58.890 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:58.890 } 00:21:58.890 11:45:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@652 -- # es=1 00:21:58.890 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:58.890 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:58.890 11:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:58.890 11:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.890 11:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:59.454 [2024-06-10 11:45:31.447018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:59.454 [2024-06-10 11:45:31.447336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.454 [2024-06-10 11:45:31.447483] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:59.454 [2024-06-10 11:45:31.447592] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.454 [2024-06-10 11:45:31.450263] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.454 [2024-06-10 11:45:31.450458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:59.454 [2024-06-10 11:45:31.450700] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:59.454 [2024-06-10 11:45:31.450848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:59.454 pt1 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.454 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.712 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:21:59.712 "name": "raid_bdev1", 00:21:59.712 "uuid": "ee32fb19-9cdc-4410-aef3-8837d2ca42c4", 00:21:59.712 "strip_size_kb": 64, 00:21:59.712 "state": "configuring", 00:21:59.712 "raid_level": "concat", 00:21:59.712 "superblock": true, 00:21:59.712 "num_base_bdevs": 3, 00:21:59.712 "num_base_bdevs_discovered": 1, 00:21:59.712 "num_base_bdevs_operational": 3, 00:21:59.712 "base_bdevs_list": [ 00:21:59.712 { 00:21:59.712 "name": "pt1", 00:21:59.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:59.712 "is_configured": true, 00:21:59.712 "data_offset": 2048, 00:21:59.712 "data_size": 63488 00:21:59.712 }, 00:21:59.712 { 00:21:59.712 "name": null, 00:21:59.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:59.712 "is_configured": false, 00:21:59.712 "data_offset": 2048, 00:21:59.712 "data_size": 63488 00:21:59.712 }, 00:21:59.712 { 00:21:59.712 "name": null, 00:21:59.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:59.712 "is_configured": false, 00:21:59.712 "data_offset": 2048, 00:21:59.712 "data_size": 63488 00:21:59.712 } 00:21:59.712 ] 00:21:59.712 }' 00:21:59.712 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.712 11:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.645 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:22:00.645 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:00.645 [2024-06-10 11:45:32.667416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:00.645 [2024-06-10 11:45:32.667832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.645 [2024-06-10 11:45:32.667921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:00.645 [2024-06-10 11:45:32.668090] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.645 [2024-06-10 11:45:32.668654] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.645 [2024-06-10 11:45:32.668738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:00.645 [2024-06-10 11:45:32.668896] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:00.645 [2024-06-10 11:45:32.668953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:00.645 pt2 00:22:00.645 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:01.210 [2024-06-10 11:45:33.007525] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.210 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.468 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.468 "name": "raid_bdev1", 00:22:01.468 "uuid": "ee32fb19-9cdc-4410-aef3-8837d2ca42c4", 00:22:01.468 "strip_size_kb": 64, 00:22:01.468 "state": "configuring", 00:22:01.468 "raid_level": "concat", 00:22:01.468 "superblock": true, 00:22:01.468 "num_base_bdevs": 3, 00:22:01.468 "num_base_bdevs_discovered": 1, 00:22:01.468 "num_base_bdevs_operational": 3, 00:22:01.468 "base_bdevs_list": [ 00:22:01.468 { 00:22:01.468 "name": "pt1", 00:22:01.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:01.468 "is_configured": true, 00:22:01.468 "data_offset": 2048, 00:22:01.468 "data_size": 63488 00:22:01.468 }, 00:22:01.468 { 00:22:01.468 "name": null, 00:22:01.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:01.468 "is_configured": false, 00:22:01.468 "data_offset": 2048, 00:22:01.468 "data_size": 63488 00:22:01.468 }, 00:22:01.468 { 00:22:01.468 "name": null, 00:22:01.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:01.468 "is_configured": false, 00:22:01.468 "data_offset": 2048, 00:22:01.468 "data_size": 63488 00:22:01.468 } 00:22:01.468 ] 00:22:01.468 }' 00:22:01.468 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.468 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.034 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:02.034 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:02.034 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:02.292 [2024-06-10 11:45:34.167741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:02.292 [2024-06-10 11:45:34.168122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.292 [2024-06-10 11:45:34.168210] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:02.292 [2024-06-10 11:45:34.168414] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.292 [2024-06-10 11:45:34.169021] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.292 [2024-06-10 11:45:34.169201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:02.292 [2024-06-10 11:45:34.169454] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:02.292 [2024-06-10 11:45:34.169596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:22:02.292 pt2 00:22:02.292 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:02.292 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:02.292 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:02.549 [2024-06-10 11:45:34.423817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:02.549 [2024-06-10 11:45:34.424136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.549 [2024-06-10 11:45:34.424300] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:02.550 [2024-06-10 11:45:34.424439] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.550 [2024-06-10 11:45:34.425098] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.550 [2024-06-10 11:45:34.425272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:02.550 [2024-06-10 11:45:34.425523] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:02.550 [2024-06-10 11:45:34.425660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:02.550 [2024-06-10 11:45:34.425921] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:22:02.550 [2024-06-10 11:45:34.426043] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:02.550 [2024-06-10 11:45:34.426257] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:02.550 [2024-06-10 11:45:34.426783] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:22:02.550 [2024-06-10 11:45:34.426916] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:22:02.550 [2024-06-10 11:45:34.427197] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.550 pt3 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.550 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.807 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.807 "name": "raid_bdev1", 00:22:02.807 "uuid": "ee32fb19-9cdc-4410-aef3-8837d2ca42c4", 00:22:02.807 "strip_size_kb": 64, 00:22:02.807 "state": "online", 00:22:02.807 "raid_level": "concat", 00:22:02.807 "superblock": true, 00:22:02.807 "num_base_bdevs": 3, 00:22:02.807 "num_base_bdevs_discovered": 3, 00:22:02.807 "num_base_bdevs_operational": 3, 00:22:02.807 "base_bdevs_list": [ 00:22:02.807 { 00:22:02.807 "name": "pt1", 00:22:02.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:02.807 "is_configured": true, 00:22:02.807 "data_offset": 2048, 00:22:02.807 "data_size": 63488 00:22:02.807 }, 00:22:02.807 { 00:22:02.807 "name": "pt2", 00:22:02.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:02.807 "is_configured": true, 00:22:02.807 "data_offset": 2048, 00:22:02.807 "data_size": 63488 00:22:02.807 }, 00:22:02.807 { 00:22:02.807 "name": "pt3", 00:22:02.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:02.807 "is_configured": true, 00:22:02.807 "data_offset": 2048, 00:22:02.807 "data_size": 63488 00:22:02.807 } 00:22:02.807 ] 00:22:02.807 }' 00:22:02.807 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.807 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.373 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:03.373 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:03.373 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:03.373 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:03.373 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:03.373 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:03.373 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:03.373 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:03.632 [2024-06-10 11:45:35.556361] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.632 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:03.632 "name": "raid_bdev1", 00:22:03.632 "aliases": [ 00:22:03.632 "ee32fb19-9cdc-4410-aef3-8837d2ca42c4" 00:22:03.632 ], 00:22:03.632 "product_name": "Raid Volume", 00:22:03.632 "block_size": 512, 00:22:03.632 "num_blocks": 190464, 00:22:03.632 "uuid": "ee32fb19-9cdc-4410-aef3-8837d2ca42c4", 00:22:03.632 "assigned_rate_limits": { 00:22:03.632 "rw_ios_per_sec": 0, 00:22:03.632 "rw_mbytes_per_sec": 0, 00:22:03.632 "r_mbytes_per_sec": 0, 00:22:03.632 "w_mbytes_per_sec": 0 00:22:03.633 }, 00:22:03.633 "claimed": false, 00:22:03.633 "zoned": false, 00:22:03.633 "supported_io_types": { 00:22:03.633 "read": true, 00:22:03.633 "write": true, 00:22:03.633 "unmap": true, 00:22:03.633 "write_zeroes": true, 00:22:03.633 
"flush": true, 00:22:03.633 "reset": true, 00:22:03.633 "compare": false, 00:22:03.633 "compare_and_write": false, 00:22:03.633 "abort": false, 00:22:03.633 "nvme_admin": false, 00:22:03.633 "nvme_io": false 00:22:03.633 }, 00:22:03.633 "memory_domains": [ 00:22:03.633 { 00:22:03.633 "dma_device_id": "system", 00:22:03.633 "dma_device_type": 1 00:22:03.633 }, 00:22:03.633 { 00:22:03.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.633 "dma_device_type": 2 00:22:03.633 }, 00:22:03.633 { 00:22:03.633 "dma_device_id": "system", 00:22:03.633 "dma_device_type": 1 00:22:03.633 }, 00:22:03.633 { 00:22:03.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.633 "dma_device_type": 2 00:22:03.633 }, 00:22:03.633 { 00:22:03.633 "dma_device_id": "system", 00:22:03.633 "dma_device_type": 1 00:22:03.633 }, 00:22:03.633 { 00:22:03.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.633 "dma_device_type": 2 00:22:03.633 } 00:22:03.633 ], 00:22:03.633 "driver_specific": { 00:22:03.633 "raid": { 00:22:03.633 "uuid": "ee32fb19-9cdc-4410-aef3-8837d2ca42c4", 00:22:03.633 "strip_size_kb": 64, 00:22:03.633 "state": "online", 00:22:03.633 "raid_level": "concat", 00:22:03.633 "superblock": true, 00:22:03.633 "num_base_bdevs": 3, 00:22:03.633 "num_base_bdevs_discovered": 3, 00:22:03.633 "num_base_bdevs_operational": 3, 00:22:03.633 "base_bdevs_list": [ 00:22:03.633 { 00:22:03.633 "name": "pt1", 00:22:03.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:03.633 "is_configured": true, 00:22:03.633 "data_offset": 2048, 00:22:03.633 "data_size": 63488 00:22:03.633 }, 00:22:03.633 { 00:22:03.633 "name": "pt2", 00:22:03.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:03.633 "is_configured": true, 00:22:03.633 "data_offset": 2048, 00:22:03.633 "data_size": 63488 00:22:03.633 }, 00:22:03.633 { 00:22:03.633 "name": "pt3", 00:22:03.633 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:03.633 "is_configured": true, 00:22:03.633 "data_offset": 2048, 00:22:03.633 "data_size": 63488 00:22:03.633 } 00:22:03.633 ] 00:22:03.633 } 00:22:03.633 } 00:22:03.633 }' 00:22:03.633 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:03.633 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:03.633 pt2 00:22:03.633 pt3' 00:22:03.633 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:03.633 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:03.633 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:03.938 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:03.938 "name": "pt1", 00:22:03.938 "aliases": [ 00:22:03.938 "00000000-0000-0000-0000-000000000001" 00:22:03.938 ], 00:22:03.938 "product_name": "passthru", 00:22:03.938 "block_size": 512, 00:22:03.938 "num_blocks": 65536, 00:22:03.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:03.938 "assigned_rate_limits": { 00:22:03.938 "rw_ios_per_sec": 0, 00:22:03.938 "rw_mbytes_per_sec": 0, 00:22:03.938 "r_mbytes_per_sec": 0, 00:22:03.938 "w_mbytes_per_sec": 0 00:22:03.938 }, 00:22:03.938 "claimed": true, 00:22:03.938 "claim_type": "exclusive_write", 00:22:03.938 "zoned": false, 00:22:03.938 "supported_io_types": { 00:22:03.938 "read": true, 00:22:03.938 "write": 
true, 00:22:03.938 "unmap": true, 00:22:03.938 "write_zeroes": true, 00:22:03.938 "flush": true, 00:22:03.938 "reset": true, 00:22:03.938 "compare": false, 00:22:03.938 "compare_and_write": false, 00:22:03.938 "abort": true, 00:22:03.938 "nvme_admin": false, 00:22:03.938 "nvme_io": false 00:22:03.938 }, 00:22:03.938 "memory_domains": [ 00:22:03.938 { 00:22:03.938 "dma_device_id": "system", 00:22:03.938 "dma_device_type": 1 00:22:03.938 }, 00:22:03.938 { 00:22:03.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.938 "dma_device_type": 2 00:22:03.938 } 00:22:03.938 ], 00:22:03.938 "driver_specific": { 00:22:03.938 "passthru": { 00:22:03.938 "name": "pt1", 00:22:03.938 "base_bdev_name": "malloc1" 00:22:03.938 } 00:22:03.938 } 00:22:03.938 }' 00:22:03.938 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:03.938 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:04.210 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:04.467 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:04.467 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:04.467 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:04.467 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:04.724 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:04.725 "name": "pt2", 00:22:04.725 "aliases": [ 00:22:04.725 "00000000-0000-0000-0000-000000000002" 00:22:04.725 ], 00:22:04.725 "product_name": "passthru", 00:22:04.725 "block_size": 512, 00:22:04.725 "num_blocks": 65536, 00:22:04.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:04.725 "assigned_rate_limits": { 00:22:04.725 "rw_ios_per_sec": 0, 00:22:04.725 "rw_mbytes_per_sec": 0, 00:22:04.725 "r_mbytes_per_sec": 0, 00:22:04.725 "w_mbytes_per_sec": 0 00:22:04.725 }, 00:22:04.725 "claimed": true, 00:22:04.725 "claim_type": "exclusive_write", 00:22:04.725 "zoned": false, 00:22:04.725 "supported_io_types": { 00:22:04.725 "read": true, 00:22:04.725 "write": true, 00:22:04.725 "unmap": true, 00:22:04.725 "write_zeroes": true, 00:22:04.725 "flush": true, 00:22:04.725 "reset": true, 00:22:04.725 "compare": false, 00:22:04.725 "compare_and_write": false, 00:22:04.725 "abort": true, 00:22:04.725 "nvme_admin": false, 00:22:04.725 "nvme_io": false 00:22:04.725 }, 00:22:04.725 "memory_domains": [ 00:22:04.725 { 00:22:04.725 "dma_device_id": "system", 00:22:04.725 "dma_device_type": 1 00:22:04.725 }, 00:22:04.725 
{ 00:22:04.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.725 "dma_device_type": 2 00:22:04.725 } 00:22:04.725 ], 00:22:04.725 "driver_specific": { 00:22:04.725 "passthru": { 00:22:04.725 "name": "pt2", 00:22:04.725 "base_bdev_name": "malloc2" 00:22:04.725 } 00:22:04.725 } 00:22:04.725 }' 00:22:04.725 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:04.725 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:04.725 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:04.725 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:04.725 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:04.725 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:04.725 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:04.725 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:04.982 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:04.982 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:04.982 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:04.982 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:04.982 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:04.983 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:04.983 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.241 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.241 "name": "pt3", 00:22:05.241 "aliases": [ 00:22:05.241 "00000000-0000-0000-0000-000000000003" 00:22:05.241 ], 00:22:05.241 "product_name": "passthru", 00:22:05.241 "block_size": 512, 00:22:05.241 "num_blocks": 65536, 00:22:05.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:05.241 "assigned_rate_limits": { 00:22:05.241 "rw_ios_per_sec": 0, 00:22:05.241 "rw_mbytes_per_sec": 0, 00:22:05.241 "r_mbytes_per_sec": 0, 00:22:05.241 "w_mbytes_per_sec": 0 00:22:05.241 }, 00:22:05.241 "claimed": true, 00:22:05.241 "claim_type": "exclusive_write", 00:22:05.241 "zoned": false, 00:22:05.241 "supported_io_types": { 00:22:05.241 "read": true, 00:22:05.241 "write": true, 00:22:05.241 "unmap": true, 00:22:05.241 "write_zeroes": true, 00:22:05.241 "flush": true, 00:22:05.241 "reset": true, 00:22:05.241 "compare": false, 00:22:05.241 "compare_and_write": false, 00:22:05.241 "abort": true, 00:22:05.241 "nvme_admin": false, 00:22:05.241 "nvme_io": false 00:22:05.241 }, 00:22:05.241 "memory_domains": [ 00:22:05.241 { 00:22:05.241 "dma_device_id": "system", 00:22:05.241 "dma_device_type": 1 00:22:05.241 }, 00:22:05.241 { 00:22:05.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.241 "dma_device_type": 2 00:22:05.241 } 00:22:05.241 ], 00:22:05.241 "driver_specific": { 00:22:05.241 "passthru": { 00:22:05.241 "name": "pt3", 00:22:05.241 "base_bdev_name": "malloc3" 00:22:05.241 } 00:22:05.241 } 00:22:05.241 }' 00:22:05.241 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.241 11:45:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.241 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.241 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.498 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.498 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.498 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.498 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.498 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.498 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.498 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.756 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.756 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:05.756 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:06.013 [2024-06-10 11:45:37.964889] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:06.013 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' ee32fb19-9cdc-4410-aef3-8837d2ca42c4 '!=' ee32fb19-9cdc-4410-aef3-8837d2ca42c4 ']' 00:22:06.013 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:22:06.013 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:06.013 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:06.013 11:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 131529 00:22:06.013 11:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 131529 ']' 00:22:06.013 11:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 131529 00:22:06.013 11:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:22:06.013 11:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:06.013 11:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 131529 00:22:06.013 11:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:06.013 11:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:06.013 11:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 131529' 00:22:06.013 killing process with pid 131529 00:22:06.013 11:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 131529 00:22:06.013 [2024-06-10 11:45:38.020731] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:06.013 11:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 131529 00:22:06.013 [2024-06-10 11:45:38.020984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:06.013 [2024-06-10 11:45:38.021128] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:06.013 
[2024-06-10 11:45:38.021215] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:22:06.586 [2024-06-10 11:45:38.362775] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:07.974 11:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:07.974 00:22:07.974 real 0m17.229s 00:22:07.974 user 0m30.340s 00:22:07.974 sys 0m2.144s 00:22:07.974 11:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:07.974 11:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.974 ************************************ 00:22:07.974 END TEST raid_superblock_test 00:22:07.974 ************************************ 00:22:07.974 11:45:39 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:22:07.974 11:45:39 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:22:07.974 11:45:39 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:07.974 11:45:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.974 ************************************ 00:22:07.974 START TEST raid_read_error_test 00:22:07.974 ************************************ 00:22:07.974 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 3 read 00:22:07.974 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:22:07.974 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:22:07.974 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:07.974 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:07.974 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.sNrpXj6s0n 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=132038 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 132038 /var/tmp/spdk-raid.sock 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 132038 ']' 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:07.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:07.975 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.975 [2024-06-10 11:45:40.020646] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:22:07.975 [2024-06-10 11:45:40.021260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132038 ] 00:22:08.232 [2024-06-10 11:45:40.207144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.491 [2024-06-10 11:45:40.482320] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.749 [2024-06-10 11:45:40.773755] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:09.045 11:45:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:09.045 11:45:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:22:09.045 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:09.045 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:09.303 BaseBdev1_malloc 00:22:09.303 11:45:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:09.561 true 00:22:09.561 11:45:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:09.819 [2024-06-10 11:45:41.845274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:09.819 [2024-06-10 11:45:41.845654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.819 [2024-06-10 11:45:41.845850] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:22:09.819 [2024-06-10 11:45:41.845977] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.819 [2024-06-10 11:45:41.848901] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.819 [2024-06-10 11:45:41.849133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:09.819 BaseBdev1 00:22:10.076 11:45:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:10.076 11:45:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:10.333 BaseBdev2_malloc 00:22:10.333 11:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:10.591 true 00:22:10.591 11:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:10.849 [2024-06-10 11:45:42.866897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:10.849 [2024-06-10 11:45:42.867375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.849 [2024-06-10 11:45:42.867674] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:10.849 [2024-06-10 11:45:42.867848] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.849 [2024-06-10 11:45:42.871550] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.849 [2024-06-10 11:45:42.871871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:10.849 BaseBdev2 00:22:10.849 11:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:10.849 11:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:11.414 BaseBdev3_malloc 00:22:11.414 11:45:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:11.414 true 00:22:11.414 11:45:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:11.981 [2024-06-10 11:45:43.828408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:11.981 [2024-06-10 11:45:43.828792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.981 [2024-06-10 11:45:43.828951] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:11.981 [2024-06-10 11:45:43.829071] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.981 [2024-06-10 11:45:43.831979] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.981 [2024-06-10 11:45:43.832228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:11.981 BaseBdev3 00:22:11.981 11:45:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:12.244 [2024-06-10 11:45:44.148756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.244 [2024-06-10 11:45:44.151392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:12.244 [2024-06-10 11:45:44.151709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:12.244 [2024-06-10 11:45:44.152141] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:22:12.244 [2024-06-10 11:45:44.152279] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:12.244 [2024-06-10 11:45:44.152487] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:12.244 [2024-06-10 11:45:44.153049] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:22:12.244 [2024-06-10 11:45:44.153181] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:22:12.244 [2024-06-10 11:45:44.153553] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.244 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.502 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.502 "name": "raid_bdev1", 00:22:12.502 "uuid": "d16a348d-98f6-4b1e-9e23-b77c8e4dce64", 00:22:12.502 "strip_size_kb": 64, 00:22:12.502 "state": "online", 00:22:12.502 "raid_level": "concat", 00:22:12.502 "superblock": true, 00:22:12.502 "num_base_bdevs": 3, 00:22:12.502 "num_base_bdevs_discovered": 3, 00:22:12.502 "num_base_bdevs_operational": 3, 00:22:12.502 "base_bdevs_list": [ 00:22:12.502 { 00:22:12.502 "name": "BaseBdev1", 00:22:12.502 "uuid": "8f389129-7d44-5ff2-8b35-268cf46b818a", 00:22:12.502 "is_configured": true, 00:22:12.502 "data_offset": 2048, 00:22:12.502 "data_size": 63488 00:22:12.502 }, 00:22:12.502 { 00:22:12.502 "name": "BaseBdev2", 00:22:12.502 "uuid": "0b70c909-876e-54a5-9619-2fdc228e7ba1", 00:22:12.502 "is_configured": true, 00:22:12.502 "data_offset": 2048, 00:22:12.502 "data_size": 63488 00:22:12.502 }, 00:22:12.502 { 00:22:12.502 "name": "BaseBdev3", 00:22:12.502 "uuid": "4dec39ae-2ad8-56df-86eb-a4db40482ee4", 00:22:12.502 "is_configured": true, 00:22:12.502 "data_offset": 2048, 00:22:12.502 "data_size": 63488 00:22:12.502 } 00:22:12.502 ] 00:22:12.502 }' 00:22:12.502 11:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.502 11:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.069 11:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:13.069 11:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:13.326 [2024-06-10 11:45:45.215446] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:14.257 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:14.514 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:14.514 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:22:14.514 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.515 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.773 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.773 "name": "raid_bdev1", 00:22:14.773 "uuid": "d16a348d-98f6-4b1e-9e23-b77c8e4dce64", 00:22:14.773 "strip_size_kb": 64, 00:22:14.773 "state": "online", 00:22:14.773 "raid_level": "concat", 00:22:14.773 "superblock": true, 00:22:14.773 "num_base_bdevs": 3, 00:22:14.773 "num_base_bdevs_discovered": 3, 00:22:14.773 "num_base_bdevs_operational": 3, 00:22:14.773 "base_bdevs_list": [ 00:22:14.773 { 00:22:14.773 "name": "BaseBdev1", 00:22:14.773 "uuid": "8f389129-7d44-5ff2-8b35-268cf46b818a", 00:22:14.773 "is_configured": true, 00:22:14.773 "data_offset": 2048, 00:22:14.773 "data_size": 63488 00:22:14.773 }, 00:22:14.773 { 00:22:14.773 "name": "BaseBdev2", 00:22:14.773 "uuid": "0b70c909-876e-54a5-9619-2fdc228e7ba1", 00:22:14.773 "is_configured": true, 00:22:14.773 "data_offset": 2048, 00:22:14.773 "data_size": 63488 00:22:14.773 }, 00:22:14.773 { 00:22:14.773 "name": "BaseBdev3", 00:22:14.773 "uuid": "4dec39ae-2ad8-56df-86eb-a4db40482ee4", 00:22:14.773 "is_configured": true, 00:22:14.773 "data_offset": 2048, 00:22:14.773 "data_size": 63488 00:22:14.773 } 00:22:14.773 ] 00:22:14.773 }' 00:22:14.773 11:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.773 11:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.398 11:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:15.656 [2024-06-10 11:45:47.522080] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.657 [2024-06-10 11:45:47.522424] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.657 [2024-06-10 11:45:47.525581] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.657 [2024-06-10 11:45:47.525849] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.657 [2024-06-10 11:45:47.525997] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:22:15.657 [2024-06-10 11:45:47.526140] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:22:15.657 0 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 132038 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 132038 ']' 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 132038 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 132038 00:22:15.657 killing process with pid 132038 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 132038' 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 132038 00:22:15.657 11:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 132038 00:22:15.657 [2024-06-10 11:45:47.571393] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.915 [2024-06-10 11:45:47.884364] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.sNrpXj6s0n 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:17.814 ************************************ 00:22:17.814 END TEST raid_read_error_test 00:22:17.814 ************************************ 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:22:17.814 00:22:17.814 real 0m9.714s 00:22:17.814 user 0m14.615s 00:22:17.814 sys 0m1.143s 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:17.814 11:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.814 11:45:49 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:22:17.814 11:45:49 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:22:17.814 11:45:49 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:17.814 11:45:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:17.814 ************************************ 00:22:17.814 START TEST raid_write_error_test 00:22:17.814 ************************************ 00:22:17.814 11:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 3 write 
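The raid_write_error_test starting above mirrors the read variant recorded earlier: each base bdev is stacked as malloc -> error -> passthru so failures can be injected at the error layer, a concat raid with a superblock is assembled on top, bdevperf drives randrw traffic over the RPC socket, a write failure is injected into EE_BaseBdev1_malloc, and the raid state is re-verified before teardown. A minimal sketch of that RPC sequence, built only from calls that appear verbatim in this trace (socket path and bdev names taken from the log; an illustration of the flow, not a substitute for bdev_raid.sh):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # build one injectable base bdev (the test repeats this for BaseBdev2 and BaseBdev3)
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $RPC bdev_error_create BaseBdev1_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # assemble the concat raid (64k strip size, superblock enabled)
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # inject write failures into the first base bdev, then re-check the raid
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
    $RPC bdev_raid_get_bdevs all
    # teardown
    $RPC bdev_raid_delete raid_bdev1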
00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.osdNdxXGaO 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=132258 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 132258 /var/tmp/spdk-raid.sock 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 132258 ']' 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 
00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:17.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:17.815 11:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.815 [2024-06-10 11:45:49.783377] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:22:17.815 [2024-06-10 11:45:49.783905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132258 ] 00:22:18.072 [2024-06-10 11:45:49.974384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.329 [2024-06-10 11:45:50.219906] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.587 [2024-06-10 11:45:50.482086] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:18.844 11:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:18.844 11:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:22:18.844 11:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:18.844 11:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:19.411 BaseBdev1_malloc 00:22:19.411 11:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:19.670 true 00:22:19.670 11:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:19.928 [2024-06-10 11:45:51.849328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:19.928 [2024-06-10 11:45:51.849783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.928 [2024-06-10 11:45:51.849979] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:22:19.928 [2024-06-10 11:45:51.850112] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.928 [2024-06-10 11:45:51.853830] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.928 [2024-06-10 11:45:51.854202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:19.928 BaseBdev1 00:22:19.928 11:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:19.928 11:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:20.184 BaseBdev2_malloc 00:22:20.442 11:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:20.442 true 00:22:20.699 11:45:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:20.699 [2024-06-10 11:45:52.748526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:20.699 [2024-06-10 11:45:52.748955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.699 [2024-06-10 11:45:52.749142] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:20.699 [2024-06-10 11:45:52.749306] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.699 [2024-06-10 11:45:52.752226] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.699 [2024-06-10 11:45:52.752465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:20.699 BaseBdev2 00:22:20.957 11:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:20.957 11:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:21.214 BaseBdev3_malloc 00:22:21.214 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:21.473 true 00:22:21.473 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:21.730 [2024-06-10 11:45:53.600711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:21.730 [2024-06-10 11:45:53.601138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.730 [2024-06-10 11:45:53.601236] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:21.730 [2024-06-10 11:45:53.601510] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.730 [2024-06-10 11:45:53.604668] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.730 [2024-06-10 11:45:53.604988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:21.730 BaseBdev3 00:22:21.730 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:21.988 [2024-06-10 11:45:53.965600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:21.988 [2024-06-10 11:45:53.968348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:21.988 [2024-06-10 11:45:53.968698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:21.988 [2024-06-10 11:45:53.969112] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:22:21.988 [2024-06-10 11:45:53.969255] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:21.988 [2024-06-10 11:45:53.969472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:21.988 [2024-06-10 11:45:53.969976] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:22:21.988 [2024-06-10 11:45:53.970109] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:22:21.988 [2024-06-10 11:45:53.970503] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.988 11:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.246 11:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.246 "name": "raid_bdev1", 00:22:22.246 "uuid": "d1e62660-c4ed-4a76-8865-097b40ba8ae6", 00:22:22.246 "strip_size_kb": 64, 00:22:22.246 "state": "online", 00:22:22.246 "raid_level": "concat", 00:22:22.246 "superblock": true, 00:22:22.246 "num_base_bdevs": 3, 00:22:22.246 "num_base_bdevs_discovered": 3, 00:22:22.246 "num_base_bdevs_operational": 3, 00:22:22.246 "base_bdevs_list": [ 00:22:22.246 { 00:22:22.246 "name": "BaseBdev1", 00:22:22.246 "uuid": "49adc719-f3e0-5949-9622-8bad097a0b30", 00:22:22.246 "is_configured": true, 00:22:22.246 "data_offset": 2048, 00:22:22.246 "data_size": 63488 00:22:22.246 }, 00:22:22.246 { 00:22:22.246 "name": "BaseBdev2", 00:22:22.246 "uuid": "f1d571a6-c948-57a5-bf0b-8bf0f90092dc", 00:22:22.246 "is_configured": true, 00:22:22.246 "data_offset": 2048, 00:22:22.246 "data_size": 63488 00:22:22.246 }, 00:22:22.246 { 00:22:22.246 "name": "BaseBdev3", 00:22:22.246 "uuid": "a3011394-2056-57f1-a1bc-5ba026096fe7", 00:22:22.246 "is_configured": true, 00:22:22.246 "data_offset": 2048, 00:22:22.246 "data_size": 63488 00:22:22.246 } 00:22:22.246 ] 00:22:22.246 }' 00:22:22.246 11:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.246 11:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.178 11:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:23.178 11:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:23.178 [2024-06-10 11:45:55.028715] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:22:24.111 11:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.370 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.629 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.629 "name": "raid_bdev1", 00:22:24.629 "uuid": "d1e62660-c4ed-4a76-8865-097b40ba8ae6", 00:22:24.629 "strip_size_kb": 64, 00:22:24.629 "state": "online", 00:22:24.629 "raid_level": "concat", 00:22:24.629 "superblock": true, 00:22:24.629 "num_base_bdevs": 3, 00:22:24.629 "num_base_bdevs_discovered": 3, 00:22:24.629 "num_base_bdevs_operational": 3, 00:22:24.629 "base_bdevs_list": [ 00:22:24.629 { 00:22:24.629 "name": "BaseBdev1", 00:22:24.629 "uuid": "49adc719-f3e0-5949-9622-8bad097a0b30", 00:22:24.629 "is_configured": true, 00:22:24.629 "data_offset": 2048, 00:22:24.629 "data_size": 63488 00:22:24.629 }, 00:22:24.629 { 00:22:24.629 "name": "BaseBdev2", 00:22:24.629 "uuid": "f1d571a6-c948-57a5-bf0b-8bf0f90092dc", 00:22:24.629 "is_configured": true, 00:22:24.629 "data_offset": 2048, 00:22:24.629 "data_size": 63488 00:22:24.629 }, 00:22:24.629 { 00:22:24.629 "name": "BaseBdev3", 00:22:24.629 "uuid": "a3011394-2056-57f1-a1bc-5ba026096fe7", 00:22:24.629 "is_configured": true, 00:22:24.629 "data_offset": 2048, 00:22:24.629 "data_size": 63488 00:22:24.629 } 00:22:24.629 ] 00:22:24.629 }' 00:22:24.629 11:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.629 11:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.196 11:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:25.454 [2024-06-10 11:45:57.394090] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:25.454 [2024-06-10 11:45:57.394406] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:25.454 [2024-06-10 11:45:57.397742] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.454 [2024-06-10 11:45:57.398068] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.454 [2024-06-10 11:45:57.398260] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:25.454 [2024-06-10 11:45:57.398399] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:22:25.454 0 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 132258 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 132258 ']' 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 132258 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 132258 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 132258' 00:22:25.454 killing process with pid 132258 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 132258 00:22:25.454 11:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 132258 00:22:25.454 [2024-06-10 11:45:57.453950] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:25.713 [2024-06-10 11:45:57.753709] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:27.652 11:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.osdNdxXGaO 00:22:27.652 11:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:27.652 11:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:27.652 ************************************ 00:22:27.652 END TEST raid_write_error_test 00:22:27.652 ************************************ 00:22:27.652 11:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:22:27.652 11:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:22:27.653 11:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:27.653 11:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:27.653 11:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:22:27.653 00:22:27.653 real 0m9.865s 00:22:27.653 user 0m14.908s 00:22:27.653 sys 0m1.118s 00:22:27.653 11:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:27.653 11:45:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.653 11:45:59 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:27.653 11:45:59 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:22:27.653 11:45:59 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:22:27.653 11:45:59 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:27.653 11:45:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:27.653 ************************************ 00:22:27.653 START TEST raid_state_function_test 00:22:27.653 ************************************ 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 3 false 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 
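For readers following the trace: the block of `local` assignments above is where raid_state_function_test maps its run_test arguments (raid1, 3 base bdevs, no superblock) onto the options used by the later bdev_raid_create call. A condensed sketch of that mapping, reconstructed only from the xtrace lines visible here (variable names come from the trace; the branches not exercised in this run are omitted rather than guessed):

# Sketch, not part of the test output.
raid_level=raid1
num_base_bdevs=3
superblock=false

base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")      # BaseBdev1 BaseBdev2 BaseBdev3, as traced above
done

if [ "$raid_level" != raid1 ]; then
    strip_size=64                   # striped levels (raid0/concat) use 64 KiB in this suite
else
    strip_size=0                    # raid1 mirrors data, so no strip size (bdev_raid.sh@234)
fi

if [ "$superblock" = false ]; then
    superblock_create_arg=''        # matches bdev_raid.sh@240 above
fi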
00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=132475 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132475' 00:22:27.653 Process raid pid: 132475 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 132475 /var/tmp/spdk-raid.sock 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 132475 ']' 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:27.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:27.653 11:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.653 [2024-06-10 11:45:59.687096] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:22:27.653 [2024-06-10 11:45:59.687521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.911 [2024-06-10 11:45:59.860084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.169 [2024-06-10 11:46:00.095997] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.425 [2024-06-10 11:46:00.339901] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.683 11:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:28.683 11:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:22:28.683 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:28.941 [2024-06-10 11:46:00.913277] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:28.941 [2024-06-10 11:46:00.913616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:28.941 [2024-06-10 11:46:00.913731] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:28.941 [2024-06-10 11:46:00.913800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:28.941 [2024-06-10 11:46:00.913962] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:28.941 [2024-06-10 11:46:00.914075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
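The sequence above (start bdev_svc on /var/tmp/spdk-raid.sock, then issue bdev_raid_create while the base bdevs are still missing) is what drives Existed_Raid into the "configuring" state that the following verify_raid_bdev_state call checks. A minimal standalone sketch of that sequence, using the commands that appear in the trace; the readiness poll with rpc_get_methods stands in for the suite's waitforlisten helper and is an assumption:

# Minimal sketch (assumes an SPDK checkout as the working directory).
sock=/var/tmp/spdk-raid.sock
./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &

# The suite uses waitforlisten; polling rpc_get_methods is a simple stand-in.
until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

# Base bdevs do not exist yet, so the raid bdev is registered but stays in the
# "configuring" state ("base bdev BaseBdevN doesn't exist now" in the log).
./scripts/rpc.py -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
./scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "Existed_Raid").state'   # expect: configuring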
00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.941 11:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.200 11:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.200 "name": "Existed_Raid", 00:22:29.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.200 "strip_size_kb": 0, 00:22:29.200 "state": "configuring", 00:22:29.200 "raid_level": "raid1", 00:22:29.200 "superblock": false, 00:22:29.200 "num_base_bdevs": 3, 00:22:29.200 "num_base_bdevs_discovered": 0, 00:22:29.200 "num_base_bdevs_operational": 3, 00:22:29.200 "base_bdevs_list": [ 00:22:29.200 { 00:22:29.200 "name": "BaseBdev1", 00:22:29.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.200 "is_configured": false, 00:22:29.200 "data_offset": 0, 00:22:29.200 "data_size": 0 00:22:29.200 }, 00:22:29.200 { 00:22:29.200 "name": "BaseBdev2", 00:22:29.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.200 "is_configured": false, 00:22:29.200 "data_offset": 0, 00:22:29.200 "data_size": 0 00:22:29.200 }, 00:22:29.200 { 00:22:29.200 "name": "BaseBdev3", 00:22:29.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.200 "is_configured": false, 00:22:29.200 "data_offset": 0, 00:22:29.200 "data_size": 0 00:22:29.200 } 00:22:29.200 ] 00:22:29.200 }' 00:22:29.200 11:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.200 11:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.134 11:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:30.134 [2024-06-10 11:46:02.093416] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:30.134 [2024-06-10 11:46:02.093731] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:30.134 11:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:30.391 [2024-06-10 11:46:02.325462] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:22:30.391 [2024-06-10 11:46:02.325805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:30.391 [2024-06-10 11:46:02.325934] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:30.391 [2024-06-10 11:46:02.325993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:30.391 [2024-06-10 11:46:02.326067] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:30.391 [2024-06-10 11:46:02.326146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:30.391 11:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:30.957 [2024-06-10 11:46:02.733343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:30.957 BaseBdev1 00:22:30.957 11:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:30.957 11:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:22:30.957 11:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:30.957 11:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:30.957 11:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:30.957 11:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:30.957 11:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:31.237 11:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:31.509 [ 00:22:31.509 { 00:22:31.509 "name": "BaseBdev1", 00:22:31.509 "aliases": [ 00:22:31.509 "3eb2662d-b824-48b7-b14a-3bbcda6903ea" 00:22:31.509 ], 00:22:31.509 "product_name": "Malloc disk", 00:22:31.509 "block_size": 512, 00:22:31.509 "num_blocks": 65536, 00:22:31.509 "uuid": "3eb2662d-b824-48b7-b14a-3bbcda6903ea", 00:22:31.509 "assigned_rate_limits": { 00:22:31.509 "rw_ios_per_sec": 0, 00:22:31.509 "rw_mbytes_per_sec": 0, 00:22:31.509 "r_mbytes_per_sec": 0, 00:22:31.509 "w_mbytes_per_sec": 0 00:22:31.509 }, 00:22:31.509 "claimed": true, 00:22:31.509 "claim_type": "exclusive_write", 00:22:31.509 "zoned": false, 00:22:31.509 "supported_io_types": { 00:22:31.509 "read": true, 00:22:31.509 "write": true, 00:22:31.509 "unmap": true, 00:22:31.509 "write_zeroes": true, 00:22:31.509 "flush": true, 00:22:31.509 "reset": true, 00:22:31.509 "compare": false, 00:22:31.509 "compare_and_write": false, 00:22:31.509 "abort": true, 00:22:31.509 "nvme_admin": false, 00:22:31.509 "nvme_io": false 00:22:31.509 }, 00:22:31.509 "memory_domains": [ 00:22:31.509 { 00:22:31.509 "dma_device_id": "system", 00:22:31.509 "dma_device_type": 1 00:22:31.509 }, 00:22:31.509 { 00:22:31.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.509 "dma_device_type": 2 00:22:31.509 } 00:22:31.509 ], 00:22:31.509 "driver_specific": {} 00:22:31.509 } 00:22:31.509 ] 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:31.509 11:46:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.509 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.767 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.767 "name": "Existed_Raid", 00:22:31.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.767 "strip_size_kb": 0, 00:22:31.767 "state": "configuring", 00:22:31.767 "raid_level": "raid1", 00:22:31.767 "superblock": false, 00:22:31.767 "num_base_bdevs": 3, 00:22:31.767 "num_base_bdevs_discovered": 1, 00:22:31.767 "num_base_bdevs_operational": 3, 00:22:31.767 "base_bdevs_list": [ 00:22:31.767 { 00:22:31.767 "name": "BaseBdev1", 00:22:31.767 "uuid": "3eb2662d-b824-48b7-b14a-3bbcda6903ea", 00:22:31.767 "is_configured": true, 00:22:31.767 "data_offset": 0, 00:22:31.767 "data_size": 65536 00:22:31.767 }, 00:22:31.767 { 00:22:31.767 "name": "BaseBdev2", 00:22:31.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.768 "is_configured": false, 00:22:31.768 "data_offset": 0, 00:22:31.768 "data_size": 0 00:22:31.768 }, 00:22:31.768 { 00:22:31.768 "name": "BaseBdev3", 00:22:31.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.768 "is_configured": false, 00:22:31.768 "data_offset": 0, 00:22:31.768 "data_size": 0 00:22:31.768 } 00:22:31.768 ] 00:22:31.768 }' 00:22:31.768 11:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.768 11:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.334 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:32.592 [2024-06-10 11:46:04.513845] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:32.592 [2024-06-10 11:46:04.514163] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:32.592 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
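The BaseBdev1 creation traced above follows the suite's waitforbdev pattern: create a malloc bdev, let examine callbacks finish, then poll bdev_get_bdevs until the named bdev shows up. A short sketch using the same commands as the trace; note that 32 MiB at a 512-byte block size gives the 65536 blocks reported in the bdev_get_bdevs output:

sock=/var/tmp/spdk-raid.sock
# 32 = total size in MiB, 512 = block size in bytes -> 32 MiB / 512 B = 65536 blocks
./scripts/rpc.py -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
./scripts/rpc.py -s "$sock" bdev_wait_for_examine
# -t 2000: poll for up to 2000 ms for the named bdev to appear
./scripts/rpc.py -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000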
00:22:32.850 [2024-06-10 11:46:04.797891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.850 [2024-06-10 11:46:04.800517] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:32.850 [2024-06-10 11:46:04.800836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:32.850 [2024-06-10 11:46:04.800972] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:32.850 [2024-06-10 11:46:04.801178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.850 11:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.415 11:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.415 "name": "Existed_Raid", 00:22:33.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.415 "strip_size_kb": 0, 00:22:33.415 "state": "configuring", 00:22:33.415 "raid_level": "raid1", 00:22:33.415 "superblock": false, 00:22:33.415 "num_base_bdevs": 3, 00:22:33.415 "num_base_bdevs_discovered": 1, 00:22:33.415 "num_base_bdevs_operational": 3, 00:22:33.415 "base_bdevs_list": [ 00:22:33.415 { 00:22:33.415 "name": "BaseBdev1", 00:22:33.415 "uuid": "3eb2662d-b824-48b7-b14a-3bbcda6903ea", 00:22:33.415 "is_configured": true, 00:22:33.415 "data_offset": 0, 00:22:33.415 "data_size": 65536 00:22:33.415 }, 00:22:33.415 { 00:22:33.415 "name": "BaseBdev2", 00:22:33.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.415 "is_configured": false, 00:22:33.415 "data_offset": 0, 00:22:33.415 "data_size": 0 00:22:33.415 }, 00:22:33.415 { 00:22:33.415 "name": "BaseBdev3", 00:22:33.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.415 "is_configured": false, 00:22:33.415 "data_offset": 0, 00:22:33.415 "data_size": 0 00:22:33.415 } 00:22:33.415 ] 00:22:33.415 }' 00:22:33.415 11:46:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.415 11:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.981 11:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:34.239 [2024-06-10 11:46:06.078141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.239 BaseBdev2 00:22:34.239 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:34.239 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:22:34.239 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:34.239 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:34.239 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:34.239 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:34.239 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:34.497 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:34.772 [ 00:22:34.772 { 00:22:34.772 "name": "BaseBdev2", 00:22:34.772 "aliases": [ 00:22:34.772 "d8fd552d-4477-45d4-90bf-574808c5fa03" 00:22:34.772 ], 00:22:34.772 "product_name": "Malloc disk", 00:22:34.772 "block_size": 512, 00:22:34.772 "num_blocks": 65536, 00:22:34.772 "uuid": "d8fd552d-4477-45d4-90bf-574808c5fa03", 00:22:34.772 "assigned_rate_limits": { 00:22:34.772 "rw_ios_per_sec": 0, 00:22:34.772 "rw_mbytes_per_sec": 0, 00:22:34.773 "r_mbytes_per_sec": 0, 00:22:34.773 "w_mbytes_per_sec": 0 00:22:34.773 }, 00:22:34.773 "claimed": true, 00:22:34.773 "claim_type": "exclusive_write", 00:22:34.773 "zoned": false, 00:22:34.773 "supported_io_types": { 00:22:34.773 "read": true, 00:22:34.773 "write": true, 00:22:34.773 "unmap": true, 00:22:34.773 "write_zeroes": true, 00:22:34.773 "flush": true, 00:22:34.773 "reset": true, 00:22:34.773 "compare": false, 00:22:34.773 "compare_and_write": false, 00:22:34.773 "abort": true, 00:22:34.773 "nvme_admin": false, 00:22:34.773 "nvme_io": false 00:22:34.773 }, 00:22:34.773 "memory_domains": [ 00:22:34.773 { 00:22:34.773 "dma_device_id": "system", 00:22:34.773 "dma_device_type": 1 00:22:34.773 }, 00:22:34.773 { 00:22:34.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.773 "dma_device_type": 2 00:22:34.773 } 00:22:34.773 ], 00:22:34.773 "driver_specific": {} 00:22:34.773 } 00:22:34.773 ] 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:34.773 11:46:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.773 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.030 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.030 "name": "Existed_Raid", 00:22:35.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.030 "strip_size_kb": 0, 00:22:35.030 "state": "configuring", 00:22:35.030 "raid_level": "raid1", 00:22:35.030 "superblock": false, 00:22:35.030 "num_base_bdevs": 3, 00:22:35.030 "num_base_bdevs_discovered": 2, 00:22:35.030 "num_base_bdevs_operational": 3, 00:22:35.030 "base_bdevs_list": [ 00:22:35.030 { 00:22:35.030 "name": "BaseBdev1", 00:22:35.030 "uuid": "3eb2662d-b824-48b7-b14a-3bbcda6903ea", 00:22:35.030 "is_configured": true, 00:22:35.030 "data_offset": 0, 00:22:35.030 "data_size": 65536 00:22:35.030 }, 00:22:35.030 { 00:22:35.030 "name": "BaseBdev2", 00:22:35.030 "uuid": "d8fd552d-4477-45d4-90bf-574808c5fa03", 00:22:35.030 "is_configured": true, 00:22:35.030 "data_offset": 0, 00:22:35.030 "data_size": 65536 00:22:35.030 }, 00:22:35.030 { 00:22:35.030 "name": "BaseBdev3", 00:22:35.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.030 "is_configured": false, 00:22:35.030 "data_offset": 0, 00:22:35.030 "data_size": 0 00:22:35.030 } 00:22:35.030 ] 00:22:35.030 }' 00:22:35.030 11:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.030 11:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.594 11:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:35.594 [2024-06-10 11:46:07.608485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:35.594 [2024-06-10 11:46:07.608804] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:22:35.594 [2024-06-10 11:46:07.608859] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:35.594 [2024-06-10 11:46:07.609158] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:35.594 [2024-06-10 11:46:07.609717] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:22:35.594 [2024-06-10 11:46:07.609859] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x616000007280 00:22:35.594 [2024-06-10 11:46:07.610282] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.594 BaseBdev3 00:22:35.594 11:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:35.594 11:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:22:35.594 11:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:35.594 11:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:35.594 11:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:35.594 11:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:35.594 11:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:35.852 11:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:36.109 [ 00:22:36.109 { 00:22:36.109 "name": "BaseBdev3", 00:22:36.109 "aliases": [ 00:22:36.109 "49cdce94-0634-4d52-8aea-1edeb046bd28" 00:22:36.109 ], 00:22:36.109 "product_name": "Malloc disk", 00:22:36.109 "block_size": 512, 00:22:36.109 "num_blocks": 65536, 00:22:36.109 "uuid": "49cdce94-0634-4d52-8aea-1edeb046bd28", 00:22:36.109 "assigned_rate_limits": { 00:22:36.109 "rw_ios_per_sec": 0, 00:22:36.109 "rw_mbytes_per_sec": 0, 00:22:36.109 "r_mbytes_per_sec": 0, 00:22:36.109 "w_mbytes_per_sec": 0 00:22:36.109 }, 00:22:36.109 "claimed": true, 00:22:36.109 "claim_type": "exclusive_write", 00:22:36.109 "zoned": false, 00:22:36.109 "supported_io_types": { 00:22:36.109 "read": true, 00:22:36.109 "write": true, 00:22:36.109 "unmap": true, 00:22:36.109 "write_zeroes": true, 00:22:36.109 "flush": true, 00:22:36.109 "reset": true, 00:22:36.109 "compare": false, 00:22:36.109 "compare_and_write": false, 00:22:36.109 "abort": true, 00:22:36.109 "nvme_admin": false, 00:22:36.109 "nvme_io": false 00:22:36.109 }, 00:22:36.109 "memory_domains": [ 00:22:36.109 { 00:22:36.110 "dma_device_id": "system", 00:22:36.110 "dma_device_type": 1 00:22:36.110 }, 00:22:36.110 { 00:22:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.110 "dma_device_type": 2 00:22:36.110 } 00:22:36.110 ], 00:22:36.110 "driver_specific": {} 00:22:36.110 } 00:22:36.110 ] 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.110 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.367 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.367 "name": "Existed_Raid", 00:22:36.367 "uuid": "76158976-60e7-425c-83e1-8b6f333121ac", 00:22:36.367 "strip_size_kb": 0, 00:22:36.367 "state": "online", 00:22:36.367 "raid_level": "raid1", 00:22:36.367 "superblock": false, 00:22:36.367 "num_base_bdevs": 3, 00:22:36.367 "num_base_bdevs_discovered": 3, 00:22:36.367 "num_base_bdevs_operational": 3, 00:22:36.367 "base_bdevs_list": [ 00:22:36.367 { 00:22:36.367 "name": "BaseBdev1", 00:22:36.367 "uuid": "3eb2662d-b824-48b7-b14a-3bbcda6903ea", 00:22:36.367 "is_configured": true, 00:22:36.367 "data_offset": 0, 00:22:36.367 "data_size": 65536 00:22:36.367 }, 00:22:36.367 { 00:22:36.367 "name": "BaseBdev2", 00:22:36.367 "uuid": "d8fd552d-4477-45d4-90bf-574808c5fa03", 00:22:36.367 "is_configured": true, 00:22:36.367 "data_offset": 0, 00:22:36.367 "data_size": 65536 00:22:36.367 }, 00:22:36.367 { 00:22:36.367 "name": "BaseBdev3", 00:22:36.367 "uuid": "49cdce94-0634-4d52-8aea-1edeb046bd28", 00:22:36.367 "is_configured": true, 00:22:36.367 "data_offset": 0, 00:22:36.367 "data_size": 65536 00:22:36.367 } 00:22:36.367 ] 00:22:36.367 }' 00:22:36.367 11:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.367 11:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.298 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:37.298 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:37.298 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:37.298 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:37.298 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:37.298 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:37.298 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:37.298 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:37.555 [2024-06-10 11:46:09.381349] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.555 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:37.555 "name": "Existed_Raid", 00:22:37.555 "aliases": [ 00:22:37.555 "76158976-60e7-425c-83e1-8b6f333121ac" 00:22:37.555 ], 00:22:37.555 
"product_name": "Raid Volume", 00:22:37.555 "block_size": 512, 00:22:37.555 "num_blocks": 65536, 00:22:37.555 "uuid": "76158976-60e7-425c-83e1-8b6f333121ac", 00:22:37.555 "assigned_rate_limits": { 00:22:37.555 "rw_ios_per_sec": 0, 00:22:37.555 "rw_mbytes_per_sec": 0, 00:22:37.555 "r_mbytes_per_sec": 0, 00:22:37.555 "w_mbytes_per_sec": 0 00:22:37.555 }, 00:22:37.555 "claimed": false, 00:22:37.555 "zoned": false, 00:22:37.555 "supported_io_types": { 00:22:37.555 "read": true, 00:22:37.555 "write": true, 00:22:37.555 "unmap": false, 00:22:37.555 "write_zeroes": true, 00:22:37.555 "flush": false, 00:22:37.555 "reset": true, 00:22:37.555 "compare": false, 00:22:37.555 "compare_and_write": false, 00:22:37.555 "abort": false, 00:22:37.555 "nvme_admin": false, 00:22:37.555 "nvme_io": false 00:22:37.555 }, 00:22:37.555 "memory_domains": [ 00:22:37.555 { 00:22:37.555 "dma_device_id": "system", 00:22:37.555 "dma_device_type": 1 00:22:37.555 }, 00:22:37.555 { 00:22:37.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.555 "dma_device_type": 2 00:22:37.555 }, 00:22:37.555 { 00:22:37.555 "dma_device_id": "system", 00:22:37.555 "dma_device_type": 1 00:22:37.555 }, 00:22:37.555 { 00:22:37.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.555 "dma_device_type": 2 00:22:37.555 }, 00:22:37.555 { 00:22:37.555 "dma_device_id": "system", 00:22:37.555 "dma_device_type": 1 00:22:37.555 }, 00:22:37.555 { 00:22:37.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.555 "dma_device_type": 2 00:22:37.555 } 00:22:37.555 ], 00:22:37.555 "driver_specific": { 00:22:37.555 "raid": { 00:22:37.555 "uuid": "76158976-60e7-425c-83e1-8b6f333121ac", 00:22:37.555 "strip_size_kb": 0, 00:22:37.555 "state": "online", 00:22:37.555 "raid_level": "raid1", 00:22:37.555 "superblock": false, 00:22:37.555 "num_base_bdevs": 3, 00:22:37.555 "num_base_bdevs_discovered": 3, 00:22:37.555 "num_base_bdevs_operational": 3, 00:22:37.555 "base_bdevs_list": [ 00:22:37.555 { 00:22:37.555 "name": "BaseBdev1", 00:22:37.555 "uuid": "3eb2662d-b824-48b7-b14a-3bbcda6903ea", 00:22:37.555 "is_configured": true, 00:22:37.555 "data_offset": 0, 00:22:37.555 "data_size": 65536 00:22:37.555 }, 00:22:37.555 { 00:22:37.555 "name": "BaseBdev2", 00:22:37.555 "uuid": "d8fd552d-4477-45d4-90bf-574808c5fa03", 00:22:37.555 "is_configured": true, 00:22:37.555 "data_offset": 0, 00:22:37.555 "data_size": 65536 00:22:37.555 }, 00:22:37.555 { 00:22:37.555 "name": "BaseBdev3", 00:22:37.555 "uuid": "49cdce94-0634-4d52-8aea-1edeb046bd28", 00:22:37.555 "is_configured": true, 00:22:37.555 "data_offset": 0, 00:22:37.555 "data_size": 65536 00:22:37.555 } 00:22:37.555 ] 00:22:37.555 } 00:22:37.555 } 00:22:37.555 }' 00:22:37.555 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:37.555 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:37.555 BaseBdev2 00:22:37.555 BaseBdev3' 00:22:37.555 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:37.555 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:37.555 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:37.812 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:37.812 "name": "BaseBdev1", 
00:22:37.812 "aliases": [ 00:22:37.812 "3eb2662d-b824-48b7-b14a-3bbcda6903ea" 00:22:37.812 ], 00:22:37.812 "product_name": "Malloc disk", 00:22:37.812 "block_size": 512, 00:22:37.812 "num_blocks": 65536, 00:22:37.812 "uuid": "3eb2662d-b824-48b7-b14a-3bbcda6903ea", 00:22:37.812 "assigned_rate_limits": { 00:22:37.812 "rw_ios_per_sec": 0, 00:22:37.812 "rw_mbytes_per_sec": 0, 00:22:37.812 "r_mbytes_per_sec": 0, 00:22:37.812 "w_mbytes_per_sec": 0 00:22:37.812 }, 00:22:37.812 "claimed": true, 00:22:37.812 "claim_type": "exclusive_write", 00:22:37.812 "zoned": false, 00:22:37.812 "supported_io_types": { 00:22:37.812 "read": true, 00:22:37.812 "write": true, 00:22:37.812 "unmap": true, 00:22:37.812 "write_zeroes": true, 00:22:37.812 "flush": true, 00:22:37.812 "reset": true, 00:22:37.812 "compare": false, 00:22:37.812 "compare_and_write": false, 00:22:37.812 "abort": true, 00:22:37.812 "nvme_admin": false, 00:22:37.812 "nvme_io": false 00:22:37.812 }, 00:22:37.812 "memory_domains": [ 00:22:37.812 { 00:22:37.812 "dma_device_id": "system", 00:22:37.812 "dma_device_type": 1 00:22:37.812 }, 00:22:37.812 { 00:22:37.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.812 "dma_device_type": 2 00:22:37.812 } 00:22:37.812 ], 00:22:37.812 "driver_specific": {} 00:22:37.812 }' 00:22:37.812 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.069 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.069 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:38.069 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.069 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.069 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:38.069 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.069 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.326 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:38.326 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.326 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.326 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:38.326 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:38.326 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:38.327 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:38.609 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:38.609 "name": "BaseBdev2", 00:22:38.609 "aliases": [ 00:22:38.609 "d8fd552d-4477-45d4-90bf-574808c5fa03" 00:22:38.609 ], 00:22:38.609 "product_name": "Malloc disk", 00:22:38.609 "block_size": 512, 00:22:38.609 "num_blocks": 65536, 00:22:38.609 "uuid": "d8fd552d-4477-45d4-90bf-574808c5fa03", 00:22:38.609 "assigned_rate_limits": { 00:22:38.609 "rw_ios_per_sec": 0, 00:22:38.609 "rw_mbytes_per_sec": 0, 00:22:38.609 "r_mbytes_per_sec": 0, 00:22:38.609 "w_mbytes_per_sec": 0 00:22:38.609 }, 00:22:38.609 "claimed": true, 
00:22:38.609 "claim_type": "exclusive_write", 00:22:38.609 "zoned": false, 00:22:38.609 "supported_io_types": { 00:22:38.609 "read": true, 00:22:38.609 "write": true, 00:22:38.609 "unmap": true, 00:22:38.609 "write_zeroes": true, 00:22:38.609 "flush": true, 00:22:38.609 "reset": true, 00:22:38.609 "compare": false, 00:22:38.609 "compare_and_write": false, 00:22:38.609 "abort": true, 00:22:38.609 "nvme_admin": false, 00:22:38.609 "nvme_io": false 00:22:38.609 }, 00:22:38.609 "memory_domains": [ 00:22:38.609 { 00:22:38.609 "dma_device_id": "system", 00:22:38.609 "dma_device_type": 1 00:22:38.609 }, 00:22:38.609 { 00:22:38.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.609 "dma_device_type": 2 00:22:38.609 } 00:22:38.609 ], 00:22:38.609 "driver_specific": {} 00:22:38.609 }' 00:22:38.609 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.609 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.609 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:38.609 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.609 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.609 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:38.609 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.866 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.867 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:38.867 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.867 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.867 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:38.867 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:38.867 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:38.867 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:39.125 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:39.125 "name": "BaseBdev3", 00:22:39.125 "aliases": [ 00:22:39.125 "49cdce94-0634-4d52-8aea-1edeb046bd28" 00:22:39.125 ], 00:22:39.125 "product_name": "Malloc disk", 00:22:39.125 "block_size": 512, 00:22:39.125 "num_blocks": 65536, 00:22:39.125 "uuid": "49cdce94-0634-4d52-8aea-1edeb046bd28", 00:22:39.125 "assigned_rate_limits": { 00:22:39.125 "rw_ios_per_sec": 0, 00:22:39.125 "rw_mbytes_per_sec": 0, 00:22:39.125 "r_mbytes_per_sec": 0, 00:22:39.125 "w_mbytes_per_sec": 0 00:22:39.125 }, 00:22:39.125 "claimed": true, 00:22:39.125 "claim_type": "exclusive_write", 00:22:39.125 "zoned": false, 00:22:39.125 "supported_io_types": { 00:22:39.125 "read": true, 00:22:39.125 "write": true, 00:22:39.125 "unmap": true, 00:22:39.125 "write_zeroes": true, 00:22:39.125 "flush": true, 00:22:39.125 "reset": true, 00:22:39.125 "compare": false, 00:22:39.125 "compare_and_write": false, 00:22:39.125 "abort": true, 00:22:39.125 "nvme_admin": false, 00:22:39.125 "nvme_io": false 00:22:39.125 }, 00:22:39.125 "memory_domains": [ 
00:22:39.125 { 00:22:39.125 "dma_device_id": "system", 00:22:39.125 "dma_device_type": 1 00:22:39.125 }, 00:22:39.125 { 00:22:39.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.125 "dma_device_type": 2 00:22:39.125 } 00:22:39.125 ], 00:22:39.125 "driver_specific": {} 00:22:39.125 }' 00:22:39.125 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:39.383 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.641 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.641 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:39.641 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:39.899 [2024-06-10 11:46:11.793588] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.899 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.157 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.157 "name": "Existed_Raid", 00:22:40.157 "uuid": "76158976-60e7-425c-83e1-8b6f333121ac", 00:22:40.157 "strip_size_kb": 0, 00:22:40.157 "state": "online", 00:22:40.157 "raid_level": "raid1", 00:22:40.157 "superblock": false, 00:22:40.157 "num_base_bdevs": 3, 00:22:40.157 "num_base_bdevs_discovered": 2, 00:22:40.157 "num_base_bdevs_operational": 2, 00:22:40.157 "base_bdevs_list": [ 00:22:40.157 { 00:22:40.157 "name": null, 00:22:40.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.157 "is_configured": false, 00:22:40.157 "data_offset": 0, 00:22:40.157 "data_size": 65536 00:22:40.157 }, 00:22:40.157 { 00:22:40.157 "name": "BaseBdev2", 00:22:40.157 "uuid": "d8fd552d-4477-45d4-90bf-574808c5fa03", 00:22:40.157 "is_configured": true, 00:22:40.157 "data_offset": 0, 00:22:40.157 "data_size": 65536 00:22:40.157 }, 00:22:40.157 { 00:22:40.157 "name": "BaseBdev3", 00:22:40.157 "uuid": "49cdce94-0634-4d52-8aea-1edeb046bd28", 00:22:40.157 "is_configured": true, 00:22:40.157 "data_offset": 0, 00:22:40.157 "data_size": 65536 00:22:40.157 } 00:22:40.157 ] 00:22:40.157 }' 00:22:40.157 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.157 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.090 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:41.090 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:41.090 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.090 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:41.347 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:41.347 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:41.347 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:41.604 [2024-06-10 11:46:13.438526] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:41.604 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:41.604 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:41.604 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:41.604 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.862 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:41.862 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:41.862 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:42.120 [2024-06-10 11:46:14.015939] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:42.120 [2024-06-10 11:46:14.016325] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.120 [2024-06-10 11:46:14.130625] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.120 [2024-06-10 11:46:14.130922] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.120 [2024-06-10 11:46:14.131025] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:22:42.120 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:42.120 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:42.120 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:42.120 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.687 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:42.687 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:42.687 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:42.687 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:42.687 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:42.687 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:42.945 BaseBdev2 00:22:42.945 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:42.945 11:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:22:42.945 11:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:42.945 11:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:42.945 11:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:42.945 11:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:42.945 11:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:43.202 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:43.460 [ 00:22:43.460 { 00:22:43.460 "name": "BaseBdev2", 00:22:43.460 "aliases": [ 00:22:43.460 "7b18e5dc-0235-420b-a52e-664630e18a24" 00:22:43.460 ], 00:22:43.460 "product_name": "Malloc disk", 00:22:43.460 "block_size": 512, 00:22:43.460 "num_blocks": 65536, 00:22:43.460 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:43.460 "assigned_rate_limits": { 00:22:43.460 "rw_ios_per_sec": 0, 00:22:43.460 "rw_mbytes_per_sec": 0, 00:22:43.460 "r_mbytes_per_sec": 0, 00:22:43.460 "w_mbytes_per_sec": 0 00:22:43.460 }, 
00:22:43.460 "claimed": false, 00:22:43.460 "zoned": false, 00:22:43.460 "supported_io_types": { 00:22:43.460 "read": true, 00:22:43.460 "write": true, 00:22:43.460 "unmap": true, 00:22:43.460 "write_zeroes": true, 00:22:43.460 "flush": true, 00:22:43.460 "reset": true, 00:22:43.460 "compare": false, 00:22:43.460 "compare_and_write": false, 00:22:43.460 "abort": true, 00:22:43.460 "nvme_admin": false, 00:22:43.460 "nvme_io": false 00:22:43.460 }, 00:22:43.460 "memory_domains": [ 00:22:43.460 { 00:22:43.460 "dma_device_id": "system", 00:22:43.460 "dma_device_type": 1 00:22:43.460 }, 00:22:43.460 { 00:22:43.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.460 "dma_device_type": 2 00:22:43.460 } 00:22:43.460 ], 00:22:43.460 "driver_specific": {} 00:22:43.460 } 00:22:43.460 ] 00:22:43.460 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:43.460 11:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:43.460 11:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:43.460 11:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:43.718 BaseBdev3 00:22:43.718 11:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:43.718 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:22:43.718 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:43.718 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:43.718 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:43.718 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:43.718 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:43.976 11:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:44.234 [ 00:22:44.234 { 00:22:44.234 "name": "BaseBdev3", 00:22:44.234 "aliases": [ 00:22:44.234 "6dfe0824-a09e-4ff6-9e10-972d4f960a32" 00:22:44.234 ], 00:22:44.234 "product_name": "Malloc disk", 00:22:44.234 "block_size": 512, 00:22:44.234 "num_blocks": 65536, 00:22:44.234 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:44.234 "assigned_rate_limits": { 00:22:44.234 "rw_ios_per_sec": 0, 00:22:44.234 "rw_mbytes_per_sec": 0, 00:22:44.234 "r_mbytes_per_sec": 0, 00:22:44.234 "w_mbytes_per_sec": 0 00:22:44.234 }, 00:22:44.234 "claimed": false, 00:22:44.234 "zoned": false, 00:22:44.234 "supported_io_types": { 00:22:44.234 "read": true, 00:22:44.234 "write": true, 00:22:44.234 "unmap": true, 00:22:44.234 "write_zeroes": true, 00:22:44.234 "flush": true, 00:22:44.234 "reset": true, 00:22:44.234 "compare": false, 00:22:44.234 "compare_and_write": false, 00:22:44.234 "abort": true, 00:22:44.234 "nvme_admin": false, 00:22:44.234 "nvme_io": false 00:22:44.234 }, 00:22:44.234 "memory_domains": [ 00:22:44.234 { 00:22:44.234 "dma_device_id": "system", 00:22:44.234 "dma_device_type": 1 00:22:44.234 }, 00:22:44.234 { 00:22:44.234 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:44.234 "dma_device_type": 2 00:22:44.234 } 00:22:44.234 ], 00:22:44.234 "driver_specific": {} 00:22:44.234 } 00:22:44.234 ] 00:22:44.234 11:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:44.234 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:44.234 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:44.234 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:44.492 [2024-06-10 11:46:16.483221] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.492 [2024-06-10 11:46:16.483531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.492 [2024-06-10 11:46:16.483662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:44.492 [2024-06-10 11:46:16.486035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.492 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.750 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.750 "name": "Existed_Raid", 00:22:44.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.750 "strip_size_kb": 0, 00:22:44.750 "state": "configuring", 00:22:44.750 "raid_level": "raid1", 00:22:44.750 "superblock": false, 00:22:44.750 "num_base_bdevs": 3, 00:22:44.750 "num_base_bdevs_discovered": 2, 00:22:44.750 "num_base_bdevs_operational": 3, 00:22:44.750 "base_bdevs_list": [ 00:22:44.750 { 00:22:44.750 "name": "BaseBdev1", 00:22:44.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.750 "is_configured": false, 00:22:44.750 "data_offset": 0, 00:22:44.750 "data_size": 0 00:22:44.750 }, 00:22:44.750 { 00:22:44.750 "name": "BaseBdev2", 00:22:44.750 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:44.750 "is_configured": 
true, 00:22:44.750 "data_offset": 0, 00:22:44.750 "data_size": 65536 00:22:44.750 }, 00:22:44.750 { 00:22:44.750 "name": "BaseBdev3", 00:22:44.750 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:44.750 "is_configured": true, 00:22:44.750 "data_offset": 0, 00:22:44.750 "data_size": 65536 00:22:44.750 } 00:22:44.750 ] 00:22:44.750 }' 00:22:44.750 11:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.750 11:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.688 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:45.688 [2024-06-10 11:46:17.723477] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.948 "name": "Existed_Raid", 00:22:45.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.948 "strip_size_kb": 0, 00:22:45.948 "state": "configuring", 00:22:45.948 "raid_level": "raid1", 00:22:45.948 "superblock": false, 00:22:45.948 "num_base_bdevs": 3, 00:22:45.948 "num_base_bdevs_discovered": 1, 00:22:45.948 "num_base_bdevs_operational": 3, 00:22:45.948 "base_bdevs_list": [ 00:22:45.948 { 00:22:45.948 "name": "BaseBdev1", 00:22:45.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.948 "is_configured": false, 00:22:45.948 "data_offset": 0, 00:22:45.948 "data_size": 0 00:22:45.948 }, 00:22:45.948 { 00:22:45.948 "name": null, 00:22:45.948 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:45.948 "is_configured": false, 00:22:45.948 "data_offset": 0, 00:22:45.948 "data_size": 65536 00:22:45.948 }, 00:22:45.948 { 00:22:45.948 "name": "BaseBdev3", 00:22:45.948 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:45.948 "is_configured": true, 00:22:45.948 "data_offset": 0, 00:22:45.948 "data_size": 65536 00:22:45.948 } 00:22:45.948 ] 00:22:45.948 }' 00:22:45.948 11:46:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.948 11:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.881 11:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.881 11:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:46.881 11:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:46.881 11:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:47.139 [2024-06-10 11:46:19.116899] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.139 BaseBdev1 00:22:47.139 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:47.139 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:22:47.139 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:47.139 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:47.139 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:47.139 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:47.139 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.398 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:47.655 [ 00:22:47.655 { 00:22:47.655 "name": "BaseBdev1", 00:22:47.655 "aliases": [ 00:22:47.655 "570818bf-3969-4eb5-a1bf-6d75c24a3345" 00:22:47.655 ], 00:22:47.655 "product_name": "Malloc disk", 00:22:47.655 "block_size": 512, 00:22:47.655 "num_blocks": 65536, 00:22:47.655 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:47.655 "assigned_rate_limits": { 00:22:47.655 "rw_ios_per_sec": 0, 00:22:47.655 "rw_mbytes_per_sec": 0, 00:22:47.655 "r_mbytes_per_sec": 0, 00:22:47.655 "w_mbytes_per_sec": 0 00:22:47.655 }, 00:22:47.655 "claimed": true, 00:22:47.655 "claim_type": "exclusive_write", 00:22:47.655 "zoned": false, 00:22:47.655 "supported_io_types": { 00:22:47.655 "read": true, 00:22:47.655 "write": true, 00:22:47.655 "unmap": true, 00:22:47.655 "write_zeroes": true, 00:22:47.655 "flush": true, 00:22:47.655 "reset": true, 00:22:47.655 "compare": false, 00:22:47.655 "compare_and_write": false, 00:22:47.655 "abort": true, 00:22:47.655 "nvme_admin": false, 00:22:47.655 "nvme_io": false 00:22:47.655 }, 00:22:47.655 "memory_domains": [ 00:22:47.655 { 00:22:47.655 "dma_device_id": "system", 00:22:47.655 "dma_device_type": 1 00:22:47.655 }, 00:22:47.655 { 00:22:47.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.655 "dma_device_type": 2 00:22:47.655 } 00:22:47.655 ], 00:22:47.655 "driver_specific": {} 00:22:47.655 } 00:22:47.655 ] 00:22:47.655 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:47.655 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.656 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.913 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.913 "name": "Existed_Raid", 00:22:47.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.913 "strip_size_kb": 0, 00:22:47.913 "state": "configuring", 00:22:47.913 "raid_level": "raid1", 00:22:47.913 "superblock": false, 00:22:47.913 "num_base_bdevs": 3, 00:22:47.913 "num_base_bdevs_discovered": 2, 00:22:47.913 "num_base_bdevs_operational": 3, 00:22:47.913 "base_bdevs_list": [ 00:22:47.913 { 00:22:47.913 "name": "BaseBdev1", 00:22:47.913 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:47.913 "is_configured": true, 00:22:47.913 "data_offset": 0, 00:22:47.913 "data_size": 65536 00:22:47.913 }, 00:22:47.913 { 00:22:47.913 "name": null, 00:22:47.913 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:47.913 "is_configured": false, 00:22:47.913 "data_offset": 0, 00:22:47.913 "data_size": 65536 00:22:47.913 }, 00:22:47.913 { 00:22:47.913 "name": "BaseBdev3", 00:22:47.913 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:47.913 "is_configured": true, 00:22:47.913 "data_offset": 0, 00:22:47.913 "data_size": 65536 00:22:47.913 } 00:22:47.913 ] 00:22:47.913 }' 00:22:47.913 11:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.913 11:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.478 11:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.478 11:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:48.736 11:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:48.736 11:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:48.993 [2024-06-10 11:46:21.033423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:48.993 11:46:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.251 "name": "Existed_Raid", 00:22:49.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.251 "strip_size_kb": 0, 00:22:49.251 "state": "configuring", 00:22:49.251 "raid_level": "raid1", 00:22:49.251 "superblock": false, 00:22:49.251 "num_base_bdevs": 3, 00:22:49.251 "num_base_bdevs_discovered": 1, 00:22:49.251 "num_base_bdevs_operational": 3, 00:22:49.251 "base_bdevs_list": [ 00:22:49.251 { 00:22:49.251 "name": "BaseBdev1", 00:22:49.251 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:49.251 "is_configured": true, 00:22:49.251 "data_offset": 0, 00:22:49.251 "data_size": 65536 00:22:49.251 }, 00:22:49.251 { 00:22:49.251 "name": null, 00:22:49.251 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:49.251 "is_configured": false, 00:22:49.251 "data_offset": 0, 00:22:49.251 "data_size": 65536 00:22:49.251 }, 00:22:49.251 { 00:22:49.251 "name": null, 00:22:49.251 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:49.251 "is_configured": false, 00:22:49.251 "data_offset": 0, 00:22:49.251 "data_size": 65536 00:22:49.251 } 00:22:49.251 ] 00:22:49.251 }' 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.251 11:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.183 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.183 11:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:50.440 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:50.440 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:50.698 [2024-06-10 11:46:22.521768] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.698 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.957 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.957 "name": "Existed_Raid", 00:22:50.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.957 "strip_size_kb": 0, 00:22:50.957 "state": "configuring", 00:22:50.957 "raid_level": "raid1", 00:22:50.957 "superblock": false, 00:22:50.957 "num_base_bdevs": 3, 00:22:50.957 "num_base_bdevs_discovered": 2, 00:22:50.957 "num_base_bdevs_operational": 3, 00:22:50.957 "base_bdevs_list": [ 00:22:50.957 { 00:22:50.957 "name": "BaseBdev1", 00:22:50.957 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:50.957 "is_configured": true, 00:22:50.957 "data_offset": 0, 00:22:50.957 "data_size": 65536 00:22:50.957 }, 00:22:50.957 { 00:22:50.957 "name": null, 00:22:50.957 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:50.957 "is_configured": false, 00:22:50.957 "data_offset": 0, 00:22:50.957 "data_size": 65536 00:22:50.957 }, 00:22:50.957 { 00:22:50.957 "name": "BaseBdev3", 00:22:50.957 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:50.957 "is_configured": true, 00:22:50.957 "data_offset": 0, 00:22:50.957 "data_size": 65536 00:22:50.957 } 00:22:50.957 ] 00:22:50.957 }' 00:22:50.957 11:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.957 11:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.523 11:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:51.523 11:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.781 11:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:51.781 11:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:22:52.039 [2024-06-10 11:46:23.874073] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.039 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.298 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.298 "name": "Existed_Raid", 00:22:52.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.298 "strip_size_kb": 0, 00:22:52.298 "state": "configuring", 00:22:52.298 "raid_level": "raid1", 00:22:52.298 "superblock": false, 00:22:52.298 "num_base_bdevs": 3, 00:22:52.298 "num_base_bdevs_discovered": 1, 00:22:52.298 "num_base_bdevs_operational": 3, 00:22:52.298 "base_bdevs_list": [ 00:22:52.298 { 00:22:52.298 "name": null, 00:22:52.298 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:52.298 "is_configured": false, 00:22:52.298 "data_offset": 0, 00:22:52.298 "data_size": 65536 00:22:52.298 }, 00:22:52.298 { 00:22:52.298 "name": null, 00:22:52.298 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:52.298 "is_configured": false, 00:22:52.298 "data_offset": 0, 00:22:52.298 "data_size": 65536 00:22:52.298 }, 00:22:52.298 { 00:22:52.298 "name": "BaseBdev3", 00:22:52.298 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:52.298 "is_configured": true, 00:22:52.298 "data_offset": 0, 00:22:52.298 "data_size": 65536 00:22:52.298 } 00:22:52.298 ] 00:22:52.298 }' 00:22:52.298 11:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.298 11:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.325 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.325 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:53.325 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:53.325 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:53.583 [2024-06-10 11:46:25.554126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.583 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.840 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.840 "name": "Existed_Raid", 00:22:53.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.840 "strip_size_kb": 0, 00:22:53.840 "state": "configuring", 00:22:53.840 "raid_level": "raid1", 00:22:53.840 "superblock": false, 00:22:53.840 "num_base_bdevs": 3, 00:22:53.840 "num_base_bdevs_discovered": 2, 00:22:53.840 "num_base_bdevs_operational": 3, 00:22:53.840 "base_bdevs_list": [ 00:22:53.840 { 00:22:53.840 "name": null, 00:22:53.841 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:53.841 "is_configured": false, 00:22:53.841 "data_offset": 0, 00:22:53.841 "data_size": 65536 00:22:53.841 }, 00:22:53.841 { 00:22:53.841 "name": "BaseBdev2", 00:22:53.841 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:53.841 "is_configured": true, 00:22:53.841 "data_offset": 0, 00:22:53.841 "data_size": 65536 00:22:53.841 }, 00:22:53.841 { 00:22:53.841 "name": "BaseBdev3", 00:22:53.841 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:53.841 "is_configured": true, 00:22:53.841 "data_offset": 0, 00:22:53.841 "data_size": 65536 00:22:53.841 } 00:22:53.841 ] 00:22:53.841 }' 00:22:53.841 11:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.841 11:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.405 11:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:54.405 11:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.662 11:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:54.662 
11:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.662 11:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:54.919 11:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 570818bf-3969-4eb5-a1bf-6d75c24a3345 00:22:55.176 [2024-06-10 11:46:27.185441] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:55.176 [2024-06-10 11:46:27.185760] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:22:55.176 [2024-06-10 11:46:27.185807] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:55.176 [2024-06-10 11:46:27.186026] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:55.176 [2024-06-10 11:46:27.186456] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:22:55.176 [2024-06-10 11:46:27.186575] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:22:55.176 [2024-06-10 11:46:27.186918] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.176 NewBaseBdev 00:22:55.176 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:55.176 11:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:22:55.176 11:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:55.176 11:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:55.176 11:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:55.176 11:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:55.176 11:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:55.741 11:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:55.741 [ 00:22:55.741 { 00:22:55.741 "name": "NewBaseBdev", 00:22:55.741 "aliases": [ 00:22:55.741 "570818bf-3969-4eb5-a1bf-6d75c24a3345" 00:22:55.741 ], 00:22:55.741 "product_name": "Malloc disk", 00:22:55.741 "block_size": 512, 00:22:55.741 "num_blocks": 65536, 00:22:55.741 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:55.741 "assigned_rate_limits": { 00:22:55.741 "rw_ios_per_sec": 0, 00:22:55.741 "rw_mbytes_per_sec": 0, 00:22:55.741 "r_mbytes_per_sec": 0, 00:22:55.741 "w_mbytes_per_sec": 0 00:22:55.741 }, 00:22:55.741 "claimed": true, 00:22:55.741 "claim_type": "exclusive_write", 00:22:55.741 "zoned": false, 00:22:55.741 "supported_io_types": { 00:22:55.741 "read": true, 00:22:55.741 "write": true, 00:22:55.741 "unmap": true, 00:22:55.741 "write_zeroes": true, 00:22:55.741 "flush": true, 00:22:55.741 "reset": true, 00:22:55.741 "compare": false, 00:22:55.742 "compare_and_write": false, 00:22:55.742 "abort": true, 00:22:55.742 "nvme_admin": false, 00:22:55.742 "nvme_io": false 00:22:55.742 }, 
00:22:55.742 "memory_domains": [ 00:22:55.742 { 00:22:55.742 "dma_device_id": "system", 00:22:55.742 "dma_device_type": 1 00:22:55.742 }, 00:22:55.742 { 00:22:55.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.742 "dma_device_type": 2 00:22:55.742 } 00:22:55.742 ], 00:22:55.742 "driver_specific": {} 00:22:55.742 } 00:22:55.742 ] 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.742 11:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.306 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.306 "name": "Existed_Raid", 00:22:56.306 "uuid": "ed91b708-9793-47bf-9379-77573b83b800", 00:22:56.306 "strip_size_kb": 0, 00:22:56.306 "state": "online", 00:22:56.306 "raid_level": "raid1", 00:22:56.306 "superblock": false, 00:22:56.306 "num_base_bdevs": 3, 00:22:56.306 "num_base_bdevs_discovered": 3, 00:22:56.306 "num_base_bdevs_operational": 3, 00:22:56.306 "base_bdevs_list": [ 00:22:56.306 { 00:22:56.306 "name": "NewBaseBdev", 00:22:56.306 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:56.306 "is_configured": true, 00:22:56.306 "data_offset": 0, 00:22:56.306 "data_size": 65536 00:22:56.306 }, 00:22:56.306 { 00:22:56.306 "name": "BaseBdev2", 00:22:56.306 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:56.306 "is_configured": true, 00:22:56.306 "data_offset": 0, 00:22:56.306 "data_size": 65536 00:22:56.306 }, 00:22:56.306 { 00:22:56.306 "name": "BaseBdev3", 00:22:56.306 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:56.306 "is_configured": true, 00:22:56.306 "data_offset": 0, 00:22:56.306 "data_size": 65536 00:22:56.306 } 00:22:56.306 ] 00:22:56.306 }' 00:22:56.306 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.306 11:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:56.872 [2024-06-10 11:46:28.842131] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:56.872 "name": "Existed_Raid", 00:22:56.872 "aliases": [ 00:22:56.872 "ed91b708-9793-47bf-9379-77573b83b800" 00:22:56.872 ], 00:22:56.872 "product_name": "Raid Volume", 00:22:56.872 "block_size": 512, 00:22:56.872 "num_blocks": 65536, 00:22:56.872 "uuid": "ed91b708-9793-47bf-9379-77573b83b800", 00:22:56.872 "assigned_rate_limits": { 00:22:56.872 "rw_ios_per_sec": 0, 00:22:56.872 "rw_mbytes_per_sec": 0, 00:22:56.872 "r_mbytes_per_sec": 0, 00:22:56.872 "w_mbytes_per_sec": 0 00:22:56.872 }, 00:22:56.872 "claimed": false, 00:22:56.872 "zoned": false, 00:22:56.872 "supported_io_types": { 00:22:56.872 "read": true, 00:22:56.872 "write": true, 00:22:56.872 "unmap": false, 00:22:56.872 "write_zeroes": true, 00:22:56.872 "flush": false, 00:22:56.872 "reset": true, 00:22:56.872 "compare": false, 00:22:56.872 "compare_and_write": false, 00:22:56.872 "abort": false, 00:22:56.872 "nvme_admin": false, 00:22:56.872 "nvme_io": false 00:22:56.872 }, 00:22:56.872 "memory_domains": [ 00:22:56.872 { 00:22:56.872 "dma_device_id": "system", 00:22:56.872 "dma_device_type": 1 00:22:56.872 }, 00:22:56.872 { 00:22:56.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.872 "dma_device_type": 2 00:22:56.872 }, 00:22:56.872 { 00:22:56.872 "dma_device_id": "system", 00:22:56.872 "dma_device_type": 1 00:22:56.872 }, 00:22:56.872 { 00:22:56.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.872 "dma_device_type": 2 00:22:56.872 }, 00:22:56.872 { 00:22:56.872 "dma_device_id": "system", 00:22:56.872 "dma_device_type": 1 00:22:56.872 }, 00:22:56.872 { 00:22:56.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.872 "dma_device_type": 2 00:22:56.872 } 00:22:56.872 ], 00:22:56.872 "driver_specific": { 00:22:56.872 "raid": { 00:22:56.872 "uuid": "ed91b708-9793-47bf-9379-77573b83b800", 00:22:56.872 "strip_size_kb": 0, 00:22:56.872 "state": "online", 00:22:56.872 "raid_level": "raid1", 00:22:56.872 "superblock": false, 00:22:56.872 "num_base_bdevs": 3, 00:22:56.872 "num_base_bdevs_discovered": 3, 00:22:56.872 "num_base_bdevs_operational": 3, 00:22:56.872 "base_bdevs_list": [ 00:22:56.872 { 00:22:56.872 "name": "NewBaseBdev", 00:22:56.872 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:56.872 "is_configured": true, 00:22:56.872 "data_offset": 0, 00:22:56.872 "data_size": 65536 00:22:56.872 }, 00:22:56.872 { 00:22:56.872 "name": "BaseBdev2", 00:22:56.872 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:56.872 "is_configured": true, 00:22:56.872 "data_offset": 0, 00:22:56.872 "data_size": 65536 00:22:56.872 }, 00:22:56.872 { 00:22:56.872 "name": "BaseBdev3", 
00:22:56.872 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:56.872 "is_configured": true, 00:22:56.872 "data_offset": 0, 00:22:56.872 "data_size": 65536 00:22:56.872 } 00:22:56.872 ] 00:22:56.872 } 00:22:56.872 } 00:22:56.872 }' 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:56.872 BaseBdev2 00:22:56.872 BaseBdev3' 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:56.872 11:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.130 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.130 "name": "NewBaseBdev", 00:22:57.130 "aliases": [ 00:22:57.130 "570818bf-3969-4eb5-a1bf-6d75c24a3345" 00:22:57.130 ], 00:22:57.130 "product_name": "Malloc disk", 00:22:57.130 "block_size": 512, 00:22:57.130 "num_blocks": 65536, 00:22:57.130 "uuid": "570818bf-3969-4eb5-a1bf-6d75c24a3345", 00:22:57.130 "assigned_rate_limits": { 00:22:57.130 "rw_ios_per_sec": 0, 00:22:57.130 "rw_mbytes_per_sec": 0, 00:22:57.130 "r_mbytes_per_sec": 0, 00:22:57.130 "w_mbytes_per_sec": 0 00:22:57.130 }, 00:22:57.130 "claimed": true, 00:22:57.130 "claim_type": "exclusive_write", 00:22:57.130 "zoned": false, 00:22:57.130 "supported_io_types": { 00:22:57.130 "read": true, 00:22:57.130 "write": true, 00:22:57.130 "unmap": true, 00:22:57.130 "write_zeroes": true, 00:22:57.130 "flush": true, 00:22:57.130 "reset": true, 00:22:57.130 "compare": false, 00:22:57.130 "compare_and_write": false, 00:22:57.130 "abort": true, 00:22:57.130 "nvme_admin": false, 00:22:57.130 "nvme_io": false 00:22:57.130 }, 00:22:57.130 "memory_domains": [ 00:22:57.130 { 00:22:57.130 "dma_device_id": "system", 00:22:57.130 "dma_device_type": 1 00:22:57.130 }, 00:22:57.130 { 00:22:57.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.130 "dma_device_type": 2 00:22:57.130 } 00:22:57.130 ], 00:22:57.130 "driver_specific": {} 00:22:57.130 }' 00:22:57.130 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.130 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.388 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.647 11:46:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:57.647 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.647 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.647 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:57.904 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.904 "name": "BaseBdev2", 00:22:57.904 "aliases": [ 00:22:57.904 "7b18e5dc-0235-420b-a52e-664630e18a24" 00:22:57.904 ], 00:22:57.904 "product_name": "Malloc disk", 00:22:57.904 "block_size": 512, 00:22:57.904 "num_blocks": 65536, 00:22:57.904 "uuid": "7b18e5dc-0235-420b-a52e-664630e18a24", 00:22:57.904 "assigned_rate_limits": { 00:22:57.904 "rw_ios_per_sec": 0, 00:22:57.904 "rw_mbytes_per_sec": 0, 00:22:57.904 "r_mbytes_per_sec": 0, 00:22:57.904 "w_mbytes_per_sec": 0 00:22:57.904 }, 00:22:57.904 "claimed": true, 00:22:57.904 "claim_type": "exclusive_write", 00:22:57.904 "zoned": false, 00:22:57.904 "supported_io_types": { 00:22:57.904 "read": true, 00:22:57.904 "write": true, 00:22:57.904 "unmap": true, 00:22:57.904 "write_zeroes": true, 00:22:57.904 "flush": true, 00:22:57.904 "reset": true, 00:22:57.904 "compare": false, 00:22:57.904 "compare_and_write": false, 00:22:57.904 "abort": true, 00:22:57.904 "nvme_admin": false, 00:22:57.904 "nvme_io": false 00:22:57.904 }, 00:22:57.904 "memory_domains": [ 00:22:57.904 { 00:22:57.904 "dma_device_id": "system", 00:22:57.904 "dma_device_type": 1 00:22:57.904 }, 00:22:57.904 { 00:22:57.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.904 "dma_device_type": 2 00:22:57.904 } 00:22:57.904 ], 00:22:57.904 "driver_specific": {} 00:22:57.904 }' 00:22:57.904 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.904 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.904 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.904 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.904 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.904 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.904 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.162 11:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.162 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.162 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.162 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.162 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.162 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.162 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:58.162 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.420 
11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.420 "name": "BaseBdev3", 00:22:58.420 "aliases": [ 00:22:58.420 "6dfe0824-a09e-4ff6-9e10-972d4f960a32" 00:22:58.420 ], 00:22:58.420 "product_name": "Malloc disk", 00:22:58.420 "block_size": 512, 00:22:58.420 "num_blocks": 65536, 00:22:58.420 "uuid": "6dfe0824-a09e-4ff6-9e10-972d4f960a32", 00:22:58.420 "assigned_rate_limits": { 00:22:58.420 "rw_ios_per_sec": 0, 00:22:58.420 "rw_mbytes_per_sec": 0, 00:22:58.420 "r_mbytes_per_sec": 0, 00:22:58.420 "w_mbytes_per_sec": 0 00:22:58.420 }, 00:22:58.420 "claimed": true, 00:22:58.420 "claim_type": "exclusive_write", 00:22:58.420 "zoned": false, 00:22:58.420 "supported_io_types": { 00:22:58.420 "read": true, 00:22:58.420 "write": true, 00:22:58.420 "unmap": true, 00:22:58.420 "write_zeroes": true, 00:22:58.420 "flush": true, 00:22:58.420 "reset": true, 00:22:58.420 "compare": false, 00:22:58.420 "compare_and_write": false, 00:22:58.420 "abort": true, 00:22:58.420 "nvme_admin": false, 00:22:58.420 "nvme_io": false 00:22:58.420 }, 00:22:58.420 "memory_domains": [ 00:22:58.420 { 00:22:58.420 "dma_device_id": "system", 00:22:58.420 "dma_device_type": 1 00:22:58.420 }, 00:22:58.420 { 00:22:58.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.420 "dma_device_type": 2 00:22:58.420 } 00:22:58.420 ], 00:22:58.420 "driver_specific": {} 00:22:58.420 }' 00:22:58.421 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.421 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.679 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.937 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.937 11:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:59.195 [2024-06-10 11:46:31.066318] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:59.195 [2024-06-10 11:46:31.066564] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.195 [2024-06-10 11:46:31.066745] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.195 [2024-06-10 11:46:31.067137] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.195 [2024-06-10 11:46:31.067246] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:22:59.195 11:46:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 132475 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 132475 ']' 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 132475 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 132475 00:22:59.195 killing process with pid 132475 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 132475' 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 132475 00:22:59.195 [2024-06-10 11:46:31.109369] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:59.195 11:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 132475 00:22:59.452 [2024-06-10 11:46:31.452351] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:01.350 ************************************ 00:23:01.350 END TEST raid_state_function_test 00:23:01.350 ************************************ 00:23:01.350 11:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:01.350 00:23:01.350 real 0m33.326s 00:23:01.350 user 1m0.548s 00:23:01.350 sys 0m4.408s 00:23:01.350 11:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:01.350 11:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.350 11:46:32 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:23:01.350 11:46:32 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:23:01.350 11:46:32 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:01.350 11:46:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:01.350 ************************************ 00:23:01.350 START TEST raid_state_function_test_sb 00:23:01.350 ************************************ 00:23:01.350 11:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 3 true 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:01.351 11:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=133493 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 133493' 00:23:01.351 Process raid pid: 133493 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 133493 /var/tmp/spdk-raid.sock 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 133493 ']' 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:01.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:01.351 11:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.351 [2024-06-10 11:46:33.062429] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
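(Editorial note, not part of the captured log: the trace above re-runs the same state-function scenario with superblocks enabled via -s. As a hedged illustration only, the lines below sketch the RPC sequence this test drives, using only the rpc.py methods and socket path that appear in this log; it assumes an SPDK target such as the bdev_svc app shown above is already listening on /var/tmp/spdk-raid.sock.)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Create the raid1 volume first; with no base bdevs registered yet it stays
    # in the "configuring" state that the test asserts on.
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Register the base bdevs; each one is claimed by Existed_Raid as it appears.
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"
        $rpc bdev_wait_for_examine
    done

    # Once all three base bdevs are claimed, the raid reports state "online".
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

    # Tear down in the same order the test uses: delete the raid, then the malloc bdevs.
    $rpc bdev_raid_delete Existed_Raid
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do $rpc bdev_malloc_delete "$b"; done

(End of editorial note; the captured log continues below.)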
00:23:01.351 [2024-06-10 11:46:33.062831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.351 [2024-06-10 11:46:33.229035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.609 [2024-06-10 11:46:33.459602] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.867 [2024-06-10 11:46:33.699332] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:02.125 11:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:02.125 11:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:23:02.125 11:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:02.383 [2024-06-10 11:46:34.254364] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:02.383 [2024-06-10 11:46:34.254683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:02.383 [2024-06-10 11:46:34.254792] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:02.383 [2024-06-10 11:46:34.254860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:02.383 [2024-06-10 11:46:34.254931] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:02.383 [2024-06-10 11:46:34.254983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.383 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.654 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.654 "name": "Existed_Raid", 00:23:02.654 "uuid": "b4217abf-c1fa-4074-9f73-5f2667a93a53", 
00:23:02.654 "strip_size_kb": 0, 00:23:02.654 "state": "configuring", 00:23:02.654 "raid_level": "raid1", 00:23:02.654 "superblock": true, 00:23:02.654 "num_base_bdevs": 3, 00:23:02.654 "num_base_bdevs_discovered": 0, 00:23:02.654 "num_base_bdevs_operational": 3, 00:23:02.654 "base_bdevs_list": [ 00:23:02.654 { 00:23:02.654 "name": "BaseBdev1", 00:23:02.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.654 "is_configured": false, 00:23:02.654 "data_offset": 0, 00:23:02.654 "data_size": 0 00:23:02.654 }, 00:23:02.654 { 00:23:02.654 "name": "BaseBdev2", 00:23:02.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.654 "is_configured": false, 00:23:02.654 "data_offset": 0, 00:23:02.654 "data_size": 0 00:23:02.654 }, 00:23:02.654 { 00:23:02.655 "name": "BaseBdev3", 00:23:02.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.655 "is_configured": false, 00:23:02.655 "data_offset": 0, 00:23:02.655 "data_size": 0 00:23:02.655 } 00:23:02.655 ] 00:23:02.655 }' 00:23:02.655 11:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.655 11:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.220 11:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:03.478 [2024-06-10 11:46:35.342463] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:03.478 [2024-06-10 11:46:35.342800] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:03.478 11:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:03.735 [2024-06-10 11:46:35.582559] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:03.735 [2024-06-10 11:46:35.582913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:03.735 [2024-06-10 11:46:35.583034] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:03.735 [2024-06-10 11:46:35.583096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:03.735 [2024-06-10 11:46:35.583186] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:03.735 [2024-06-10 11:46:35.583251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:03.735 11:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:03.993 [2024-06-10 11:46:35.865856] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.993 BaseBdev1 00:23:03.993 11:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:03.993 11:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:23:03.993 11:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:23:03.993 11:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:23:03.993 11:46:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:23:03.993 11:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:23:03.993 11:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.251 11:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:04.509 [ 00:23:04.509 { 00:23:04.509 "name": "BaseBdev1", 00:23:04.509 "aliases": [ 00:23:04.509 "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1" 00:23:04.509 ], 00:23:04.509 "product_name": "Malloc disk", 00:23:04.509 "block_size": 512, 00:23:04.509 "num_blocks": 65536, 00:23:04.509 "uuid": "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1", 00:23:04.509 "assigned_rate_limits": { 00:23:04.509 "rw_ios_per_sec": 0, 00:23:04.509 "rw_mbytes_per_sec": 0, 00:23:04.509 "r_mbytes_per_sec": 0, 00:23:04.509 "w_mbytes_per_sec": 0 00:23:04.509 }, 00:23:04.509 "claimed": true, 00:23:04.509 "claim_type": "exclusive_write", 00:23:04.509 "zoned": false, 00:23:04.509 "supported_io_types": { 00:23:04.509 "read": true, 00:23:04.509 "write": true, 00:23:04.509 "unmap": true, 00:23:04.509 "write_zeroes": true, 00:23:04.509 "flush": true, 00:23:04.509 "reset": true, 00:23:04.509 "compare": false, 00:23:04.509 "compare_and_write": false, 00:23:04.509 "abort": true, 00:23:04.509 "nvme_admin": false, 00:23:04.509 "nvme_io": false 00:23:04.509 }, 00:23:04.509 "memory_domains": [ 00:23:04.509 { 00:23:04.509 "dma_device_id": "system", 00:23:04.509 "dma_device_type": 1 00:23:04.509 }, 00:23:04.509 { 00:23:04.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.509 "dma_device_type": 2 00:23:04.509 } 00:23:04.509 ], 00:23:04.509 "driver_specific": {} 00:23:04.509 } 00:23:04.509 ] 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.509 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:23:04.767 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.767 "name": "Existed_Raid", 00:23:04.767 "uuid": "1906fb16-be54-4a13-94ca-057cb92300af", 00:23:04.767 "strip_size_kb": 0, 00:23:04.767 "state": "configuring", 00:23:04.767 "raid_level": "raid1", 00:23:04.768 "superblock": true, 00:23:04.768 "num_base_bdevs": 3, 00:23:04.768 "num_base_bdevs_discovered": 1, 00:23:04.768 "num_base_bdevs_operational": 3, 00:23:04.768 "base_bdevs_list": [ 00:23:04.768 { 00:23:04.768 "name": "BaseBdev1", 00:23:04.768 "uuid": "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1", 00:23:04.768 "is_configured": true, 00:23:04.768 "data_offset": 2048, 00:23:04.768 "data_size": 63488 00:23:04.768 }, 00:23:04.768 { 00:23:04.768 "name": "BaseBdev2", 00:23:04.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.768 "is_configured": false, 00:23:04.768 "data_offset": 0, 00:23:04.768 "data_size": 0 00:23:04.768 }, 00:23:04.768 { 00:23:04.768 "name": "BaseBdev3", 00:23:04.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.768 "is_configured": false, 00:23:04.768 "data_offset": 0, 00:23:04.768 "data_size": 0 00:23:04.768 } 00:23:04.768 ] 00:23:04.768 }' 00:23:04.768 11:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.768 11:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.332 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:05.590 [2024-06-10 11:46:37.570298] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:05.590 [2024-06-10 11:46:37.570569] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:05.590 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:05.848 [2024-06-10 11:46:37.822372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:05.848 [2024-06-10 11:46:37.824889] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:05.848 [2024-06-10 11:46:37.825133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:05.848 [2024-06-10 11:46:37.825241] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:05.848 [2024-06-10 11:46:37.825324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:05.848 11:46:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.848 11:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.107 11:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.107 "name": "Existed_Raid", 00:23:06.107 "uuid": "13a112c9-f42e-40f1-8def-68354af7ed1a", 00:23:06.107 "strip_size_kb": 0, 00:23:06.107 "state": "configuring", 00:23:06.107 "raid_level": "raid1", 00:23:06.107 "superblock": true, 00:23:06.107 "num_base_bdevs": 3, 00:23:06.107 "num_base_bdevs_discovered": 1, 00:23:06.107 "num_base_bdevs_operational": 3, 00:23:06.107 "base_bdevs_list": [ 00:23:06.107 { 00:23:06.107 "name": "BaseBdev1", 00:23:06.107 "uuid": "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1", 00:23:06.107 "is_configured": true, 00:23:06.107 "data_offset": 2048, 00:23:06.107 "data_size": 63488 00:23:06.107 }, 00:23:06.107 { 00:23:06.107 "name": "BaseBdev2", 00:23:06.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.107 "is_configured": false, 00:23:06.107 "data_offset": 0, 00:23:06.107 "data_size": 0 00:23:06.107 }, 00:23:06.107 { 00:23:06.107 "name": "BaseBdev3", 00:23:06.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.107 "is_configured": false, 00:23:06.107 "data_offset": 0, 00:23:06.107 "data_size": 0 00:23:06.107 } 00:23:06.107 ] 00:23:06.107 }' 00:23:06.107 11:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.107 11:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.040 11:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:07.297 [2024-06-10 11:46:39.105119] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:07.297 BaseBdev2 00:23:07.297 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:07.297 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:23:07.297 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:23:07.297 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:23:07.297 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:23:07.297 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:23:07.297 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:07.554 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:07.811 [ 00:23:07.811 { 00:23:07.811 "name": "BaseBdev2", 00:23:07.811 "aliases": [ 00:23:07.811 "52b75e7e-388d-4d88-9dcb-73acc1c8078b" 00:23:07.811 ], 00:23:07.811 "product_name": "Malloc disk", 00:23:07.811 "block_size": 512, 00:23:07.811 "num_blocks": 65536, 00:23:07.811 "uuid": "52b75e7e-388d-4d88-9dcb-73acc1c8078b", 00:23:07.811 "assigned_rate_limits": { 00:23:07.811 "rw_ios_per_sec": 0, 00:23:07.811 "rw_mbytes_per_sec": 0, 00:23:07.811 "r_mbytes_per_sec": 0, 00:23:07.811 "w_mbytes_per_sec": 0 00:23:07.811 }, 00:23:07.811 "claimed": true, 00:23:07.811 "claim_type": "exclusive_write", 00:23:07.811 "zoned": false, 00:23:07.811 "supported_io_types": { 00:23:07.811 "read": true, 00:23:07.811 "write": true, 00:23:07.811 "unmap": true, 00:23:07.811 "write_zeroes": true, 00:23:07.811 "flush": true, 00:23:07.811 "reset": true, 00:23:07.811 "compare": false, 00:23:07.811 "compare_and_write": false, 00:23:07.811 "abort": true, 00:23:07.811 "nvme_admin": false, 00:23:07.811 "nvme_io": false 00:23:07.811 }, 00:23:07.811 "memory_domains": [ 00:23:07.811 { 00:23:07.811 "dma_device_id": "system", 00:23:07.811 "dma_device_type": 1 00:23:07.811 }, 00:23:07.811 { 00:23:07.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.811 "dma_device_type": 2 00:23:07.811 } 00:23:07.811 ], 00:23:07.811 "driver_specific": {} 00:23:07.811 } 00:23:07.811 ] 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.811 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.068 11:46:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.068 "name": "Existed_Raid", 00:23:08.068 "uuid": "13a112c9-f42e-40f1-8def-68354af7ed1a", 00:23:08.068 "strip_size_kb": 0, 00:23:08.068 "state": "configuring", 00:23:08.068 "raid_level": "raid1", 00:23:08.068 "superblock": true, 00:23:08.068 "num_base_bdevs": 3, 00:23:08.068 "num_base_bdevs_discovered": 2, 00:23:08.068 "num_base_bdevs_operational": 3, 00:23:08.068 "base_bdevs_list": [ 00:23:08.068 { 00:23:08.068 "name": "BaseBdev1", 00:23:08.068 "uuid": "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1", 00:23:08.068 "is_configured": true, 00:23:08.068 "data_offset": 2048, 00:23:08.068 "data_size": 63488 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "name": "BaseBdev2", 00:23:08.068 "uuid": "52b75e7e-388d-4d88-9dcb-73acc1c8078b", 00:23:08.068 "is_configured": true, 00:23:08.068 "data_offset": 2048, 00:23:08.068 "data_size": 63488 00:23:08.068 }, 00:23:08.068 { 00:23:08.068 "name": "BaseBdev3", 00:23:08.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.068 "is_configured": false, 00:23:08.068 "data_offset": 0, 00:23:08.068 "data_size": 0 00:23:08.068 } 00:23:08.068 ] 00:23:08.068 }' 00:23:08.068 11:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.068 11:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.630 11:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:08.888 [2024-06-10 11:46:40.869640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:08.888 [2024-06-10 11:46:40.869898] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:23:08.888 [2024-06-10 11:46:40.869913] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:08.888 [2024-06-10 11:46:40.870061] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:08.888 [2024-06-10 11:46:40.870429] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:08.888 [2024-06-10 11:46:40.870451] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:23:08.888 [2024-06-10 11:46:40.870622] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.888 BaseBdev3 00:23:08.888 11:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:08.888 11:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:23:08.888 11:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:23:08.888 11:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:23:08.888 11:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:23:08.888 11:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:23:08.888 11:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:09.145 11:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:23:09.402 [ 00:23:09.402 { 00:23:09.402 "name": "BaseBdev3", 00:23:09.402 "aliases": [ 00:23:09.402 "7e6be226-39cd-41a7-9174-e4058fe7eebb" 00:23:09.402 ], 00:23:09.402 "product_name": "Malloc disk", 00:23:09.402 "block_size": 512, 00:23:09.402 "num_blocks": 65536, 00:23:09.402 "uuid": "7e6be226-39cd-41a7-9174-e4058fe7eebb", 00:23:09.402 "assigned_rate_limits": { 00:23:09.402 "rw_ios_per_sec": 0, 00:23:09.402 "rw_mbytes_per_sec": 0, 00:23:09.402 "r_mbytes_per_sec": 0, 00:23:09.402 "w_mbytes_per_sec": 0 00:23:09.402 }, 00:23:09.402 "claimed": true, 00:23:09.402 "claim_type": "exclusive_write", 00:23:09.402 "zoned": false, 00:23:09.402 "supported_io_types": { 00:23:09.402 "read": true, 00:23:09.402 "write": true, 00:23:09.402 "unmap": true, 00:23:09.402 "write_zeroes": true, 00:23:09.402 "flush": true, 00:23:09.402 "reset": true, 00:23:09.402 "compare": false, 00:23:09.402 "compare_and_write": false, 00:23:09.402 "abort": true, 00:23:09.402 "nvme_admin": false, 00:23:09.402 "nvme_io": false 00:23:09.402 }, 00:23:09.402 "memory_domains": [ 00:23:09.402 { 00:23:09.402 "dma_device_id": "system", 00:23:09.402 "dma_device_type": 1 00:23:09.402 }, 00:23:09.402 { 00:23:09.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.402 "dma_device_type": 2 00:23:09.402 } 00:23:09.402 ], 00:23:09.402 "driver_specific": {} 00:23:09.402 } 00:23:09.402 ] 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.402 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.659 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.659 "name": "Existed_Raid", 00:23:09.659 "uuid": "13a112c9-f42e-40f1-8def-68354af7ed1a", 00:23:09.659 "strip_size_kb": 0, 00:23:09.659 "state": "online", 00:23:09.659 "raid_level": "raid1", 00:23:09.659 "superblock": true, 00:23:09.659 
"num_base_bdevs": 3, 00:23:09.659 "num_base_bdevs_discovered": 3, 00:23:09.659 "num_base_bdevs_operational": 3, 00:23:09.659 "base_bdevs_list": [ 00:23:09.659 { 00:23:09.659 "name": "BaseBdev1", 00:23:09.659 "uuid": "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1", 00:23:09.659 "is_configured": true, 00:23:09.659 "data_offset": 2048, 00:23:09.659 "data_size": 63488 00:23:09.659 }, 00:23:09.659 { 00:23:09.659 "name": "BaseBdev2", 00:23:09.659 "uuid": "52b75e7e-388d-4d88-9dcb-73acc1c8078b", 00:23:09.659 "is_configured": true, 00:23:09.659 "data_offset": 2048, 00:23:09.659 "data_size": 63488 00:23:09.659 }, 00:23:09.659 { 00:23:09.659 "name": "BaseBdev3", 00:23:09.659 "uuid": "7e6be226-39cd-41a7-9174-e4058fe7eebb", 00:23:09.659 "is_configured": true, 00:23:09.659 "data_offset": 2048, 00:23:09.659 "data_size": 63488 00:23:09.659 } 00:23:09.659 ] 00:23:09.659 }' 00:23:09.659 11:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.659 11:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:10.589 [2024-06-10 11:46:42.550399] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:10.589 "name": "Existed_Raid", 00:23:10.589 "aliases": [ 00:23:10.589 "13a112c9-f42e-40f1-8def-68354af7ed1a" 00:23:10.589 ], 00:23:10.589 "product_name": "Raid Volume", 00:23:10.589 "block_size": 512, 00:23:10.589 "num_blocks": 63488, 00:23:10.589 "uuid": "13a112c9-f42e-40f1-8def-68354af7ed1a", 00:23:10.589 "assigned_rate_limits": { 00:23:10.589 "rw_ios_per_sec": 0, 00:23:10.589 "rw_mbytes_per_sec": 0, 00:23:10.589 "r_mbytes_per_sec": 0, 00:23:10.589 "w_mbytes_per_sec": 0 00:23:10.589 }, 00:23:10.589 "claimed": false, 00:23:10.589 "zoned": false, 00:23:10.589 "supported_io_types": { 00:23:10.589 "read": true, 00:23:10.589 "write": true, 00:23:10.589 "unmap": false, 00:23:10.589 "write_zeroes": true, 00:23:10.589 "flush": false, 00:23:10.589 "reset": true, 00:23:10.589 "compare": false, 00:23:10.589 "compare_and_write": false, 00:23:10.589 "abort": false, 00:23:10.589 "nvme_admin": false, 00:23:10.589 "nvme_io": false 00:23:10.589 }, 00:23:10.589 "memory_domains": [ 00:23:10.589 { 00:23:10.589 "dma_device_id": "system", 00:23:10.589 "dma_device_type": 1 00:23:10.589 }, 00:23:10.589 { 00:23:10.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.589 "dma_device_type": 2 00:23:10.589 }, 00:23:10.589 { 00:23:10.589 "dma_device_id": "system", 
00:23:10.589 "dma_device_type": 1 00:23:10.589 }, 00:23:10.589 { 00:23:10.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.589 "dma_device_type": 2 00:23:10.589 }, 00:23:10.589 { 00:23:10.589 "dma_device_id": "system", 00:23:10.589 "dma_device_type": 1 00:23:10.589 }, 00:23:10.589 { 00:23:10.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.589 "dma_device_type": 2 00:23:10.589 } 00:23:10.589 ], 00:23:10.589 "driver_specific": { 00:23:10.589 "raid": { 00:23:10.589 "uuid": "13a112c9-f42e-40f1-8def-68354af7ed1a", 00:23:10.589 "strip_size_kb": 0, 00:23:10.589 "state": "online", 00:23:10.589 "raid_level": "raid1", 00:23:10.589 "superblock": true, 00:23:10.589 "num_base_bdevs": 3, 00:23:10.589 "num_base_bdevs_discovered": 3, 00:23:10.589 "num_base_bdevs_operational": 3, 00:23:10.589 "base_bdevs_list": [ 00:23:10.589 { 00:23:10.589 "name": "BaseBdev1", 00:23:10.589 "uuid": "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1", 00:23:10.589 "is_configured": true, 00:23:10.589 "data_offset": 2048, 00:23:10.589 "data_size": 63488 00:23:10.589 }, 00:23:10.589 { 00:23:10.589 "name": "BaseBdev2", 00:23:10.589 "uuid": "52b75e7e-388d-4d88-9dcb-73acc1c8078b", 00:23:10.589 "is_configured": true, 00:23:10.589 "data_offset": 2048, 00:23:10.589 "data_size": 63488 00:23:10.589 }, 00:23:10.589 { 00:23:10.589 "name": "BaseBdev3", 00:23:10.589 "uuid": "7e6be226-39cd-41a7-9174-e4058fe7eebb", 00:23:10.589 "is_configured": true, 00:23:10.589 "data_offset": 2048, 00:23:10.589 "data_size": 63488 00:23:10.589 } 00:23:10.589 ] 00:23:10.589 } 00:23:10.589 } 00:23:10.589 }' 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:10.589 BaseBdev2 00:23:10.589 BaseBdev3' 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:10.589 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:11.154 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:11.154 "name": "BaseBdev1", 00:23:11.154 "aliases": [ 00:23:11.154 "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1" 00:23:11.154 ], 00:23:11.154 "product_name": "Malloc disk", 00:23:11.154 "block_size": 512, 00:23:11.154 "num_blocks": 65536, 00:23:11.154 "uuid": "8ea1d3bf-e1dc-4c78-8ae0-a725096b49b1", 00:23:11.154 "assigned_rate_limits": { 00:23:11.154 "rw_ios_per_sec": 0, 00:23:11.154 "rw_mbytes_per_sec": 0, 00:23:11.154 "r_mbytes_per_sec": 0, 00:23:11.154 "w_mbytes_per_sec": 0 00:23:11.154 }, 00:23:11.154 "claimed": true, 00:23:11.154 "claim_type": "exclusive_write", 00:23:11.154 "zoned": false, 00:23:11.154 "supported_io_types": { 00:23:11.154 "read": true, 00:23:11.154 "write": true, 00:23:11.154 "unmap": true, 00:23:11.154 "write_zeroes": true, 00:23:11.154 "flush": true, 00:23:11.154 "reset": true, 00:23:11.154 "compare": false, 00:23:11.154 "compare_and_write": false, 00:23:11.154 "abort": true, 00:23:11.154 "nvme_admin": false, 00:23:11.154 "nvme_io": false 00:23:11.154 }, 00:23:11.154 "memory_domains": [ 00:23:11.154 { 00:23:11.154 "dma_device_id": "system", 00:23:11.154 "dma_device_type": 1 00:23:11.154 }, 
00:23:11.154 { 00:23:11.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.154 "dma_device_type": 2 00:23:11.154 } 00:23:11.154 ], 00:23:11.154 "driver_specific": {} 00:23:11.154 }' 00:23:11.154 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.154 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.154 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:11.154 11:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.154 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.154 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:11.154 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.154 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.154 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:11.154 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.154 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.411 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:11.411 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:11.411 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:11.411 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:11.668 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:11.668 "name": "BaseBdev2", 00:23:11.668 "aliases": [ 00:23:11.668 "52b75e7e-388d-4d88-9dcb-73acc1c8078b" 00:23:11.668 ], 00:23:11.668 "product_name": "Malloc disk", 00:23:11.668 "block_size": 512, 00:23:11.668 "num_blocks": 65536, 00:23:11.668 "uuid": "52b75e7e-388d-4d88-9dcb-73acc1c8078b", 00:23:11.668 "assigned_rate_limits": { 00:23:11.668 "rw_ios_per_sec": 0, 00:23:11.668 "rw_mbytes_per_sec": 0, 00:23:11.668 "r_mbytes_per_sec": 0, 00:23:11.668 "w_mbytes_per_sec": 0 00:23:11.668 }, 00:23:11.668 "claimed": true, 00:23:11.668 "claim_type": "exclusive_write", 00:23:11.668 "zoned": false, 00:23:11.668 "supported_io_types": { 00:23:11.668 "read": true, 00:23:11.668 "write": true, 00:23:11.668 "unmap": true, 00:23:11.668 "write_zeroes": true, 00:23:11.668 "flush": true, 00:23:11.668 "reset": true, 00:23:11.668 "compare": false, 00:23:11.668 "compare_and_write": false, 00:23:11.668 "abort": true, 00:23:11.668 "nvme_admin": false, 00:23:11.668 "nvme_io": false 00:23:11.668 }, 00:23:11.668 "memory_domains": [ 00:23:11.668 { 00:23:11.668 "dma_device_id": "system", 00:23:11.668 "dma_device_type": 1 00:23:11.668 }, 00:23:11.668 { 00:23:11.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.668 "dma_device_type": 2 00:23:11.668 } 00:23:11.668 ], 00:23:11.668 "driver_specific": {} 00:23:11.668 }' 00:23:11.668 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.668 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.668 11:46:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:11.668 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.668 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:11.926 11:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.490 "name": "BaseBdev3", 00:23:12.490 "aliases": [ 00:23:12.490 "7e6be226-39cd-41a7-9174-e4058fe7eebb" 00:23:12.490 ], 00:23:12.490 "product_name": "Malloc disk", 00:23:12.490 "block_size": 512, 00:23:12.490 "num_blocks": 65536, 00:23:12.490 "uuid": "7e6be226-39cd-41a7-9174-e4058fe7eebb", 00:23:12.490 "assigned_rate_limits": { 00:23:12.490 "rw_ios_per_sec": 0, 00:23:12.490 "rw_mbytes_per_sec": 0, 00:23:12.490 "r_mbytes_per_sec": 0, 00:23:12.490 "w_mbytes_per_sec": 0 00:23:12.490 }, 00:23:12.490 "claimed": true, 00:23:12.490 "claim_type": "exclusive_write", 00:23:12.490 "zoned": false, 00:23:12.490 "supported_io_types": { 00:23:12.490 "read": true, 00:23:12.490 "write": true, 00:23:12.490 "unmap": true, 00:23:12.490 "write_zeroes": true, 00:23:12.490 "flush": true, 00:23:12.490 "reset": true, 00:23:12.490 "compare": false, 00:23:12.490 "compare_and_write": false, 00:23:12.490 "abort": true, 00:23:12.490 "nvme_admin": false, 00:23:12.490 "nvme_io": false 00:23:12.490 }, 00:23:12.490 "memory_domains": [ 00:23:12.490 { 00:23:12.490 "dma_device_id": "system", 00:23:12.490 "dma_device_type": 1 00:23:12.490 }, 00:23:12.490 { 00:23:12.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.490 "dma_device_type": 2 00:23:12.490 } 00:23:12.490 ], 00:23:12.490 "driver_specific": {} 00:23:12.490 }' 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:12.490 11:46:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.490 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.747 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:12.747 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:13.005 [2024-06-10 11:46:44.858726] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.005 11:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:13.262 11:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.262 "name": "Existed_Raid", 00:23:13.262 "uuid": "13a112c9-f42e-40f1-8def-68354af7ed1a", 00:23:13.262 "strip_size_kb": 0, 00:23:13.262 "state": "online", 00:23:13.262 "raid_level": "raid1", 00:23:13.262 "superblock": true, 00:23:13.262 "num_base_bdevs": 3, 00:23:13.262 "num_base_bdevs_discovered": 2, 00:23:13.262 "num_base_bdevs_operational": 2, 00:23:13.262 "base_bdevs_list": [ 00:23:13.262 { 00:23:13.262 "name": null, 00:23:13.262 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:13.262 "is_configured": false, 00:23:13.262 "data_offset": 2048, 00:23:13.262 "data_size": 63488 00:23:13.262 }, 00:23:13.262 { 00:23:13.262 "name": "BaseBdev2", 00:23:13.262 "uuid": "52b75e7e-388d-4d88-9dcb-73acc1c8078b", 00:23:13.262 "is_configured": true, 00:23:13.262 "data_offset": 2048, 00:23:13.262 "data_size": 63488 00:23:13.262 }, 00:23:13.262 { 00:23:13.262 "name": "BaseBdev3", 00:23:13.262 "uuid": "7e6be226-39cd-41a7-9174-e4058fe7eebb", 00:23:13.262 "is_configured": true, 00:23:13.262 "data_offset": 2048, 00:23:13.262 "data_size": 63488 00:23:13.262 } 00:23:13.262 ] 00:23:13.262 }' 00:23:13.262 11:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.262 11:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.826 11:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:13.826 11:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:13.826 11:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.826 11:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:14.084 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:14.084 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:14.084 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:14.341 [2024-06-10 11:46:46.368353] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:14.598 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:14.598 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:14.598 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.598 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:14.855 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:14.855 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:14.855 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:15.113 [2024-06-10 11:46:47.001408] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:15.113 [2024-06-10 11:46:47.001770] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:15.113 [2024-06-10 11:46:47.113843] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:15.113 [2024-06-10 11:46:47.114894] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:15.113 [2024-06-10 11:46:47.115292] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:15.113 11:46:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:15.113 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:15.113 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.113 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:15.370 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:15.370 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:15.370 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:23:15.370 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:15.370 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:15.370 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:15.659 BaseBdev2 00:23:15.659 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:15.660 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:23:15.660 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:23:15.660 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:23:15.660 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:23:15.660 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:23:15.660 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:15.918 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:16.483 [ 00:23:16.483 { 00:23:16.483 "name": "BaseBdev2", 00:23:16.483 "aliases": [ 00:23:16.483 "fa494936-83ca-49c5-a4a5-6d523a196fb9" 00:23:16.483 ], 00:23:16.483 "product_name": "Malloc disk", 00:23:16.483 "block_size": 512, 00:23:16.483 "num_blocks": 65536, 00:23:16.483 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:16.483 "assigned_rate_limits": { 00:23:16.483 "rw_ios_per_sec": 0, 00:23:16.483 "rw_mbytes_per_sec": 0, 00:23:16.483 "r_mbytes_per_sec": 0, 00:23:16.483 "w_mbytes_per_sec": 0 00:23:16.483 }, 00:23:16.483 "claimed": false, 00:23:16.483 "zoned": false, 00:23:16.483 "supported_io_types": { 00:23:16.483 "read": true, 00:23:16.483 "write": true, 00:23:16.483 "unmap": true, 00:23:16.483 "write_zeroes": true, 00:23:16.483 "flush": true, 00:23:16.483 "reset": true, 00:23:16.483 "compare": false, 00:23:16.483 "compare_and_write": false, 00:23:16.483 "abort": true, 00:23:16.483 "nvme_admin": false, 00:23:16.483 "nvme_io": false 00:23:16.483 }, 00:23:16.483 "memory_domains": [ 00:23:16.483 { 00:23:16.483 "dma_device_id": "system", 00:23:16.483 "dma_device_type": 1 00:23:16.483 }, 00:23:16.483 { 00:23:16.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.483 "dma_device_type": 2 00:23:16.483 } 00:23:16.483 ], 
00:23:16.483 "driver_specific": {} 00:23:16.483 } 00:23:16.483 ] 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:16.483 BaseBdev3 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:23:16.483 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.048 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:17.048 [ 00:23:17.048 { 00:23:17.048 "name": "BaseBdev3", 00:23:17.048 "aliases": [ 00:23:17.048 "9cba85c4-5a06-4326-aa27-29ed25262cac" 00:23:17.048 ], 00:23:17.048 "product_name": "Malloc disk", 00:23:17.048 "block_size": 512, 00:23:17.048 "num_blocks": 65536, 00:23:17.048 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:17.048 "assigned_rate_limits": { 00:23:17.048 "rw_ios_per_sec": 0, 00:23:17.048 "rw_mbytes_per_sec": 0, 00:23:17.048 "r_mbytes_per_sec": 0, 00:23:17.048 "w_mbytes_per_sec": 0 00:23:17.048 }, 00:23:17.048 "claimed": false, 00:23:17.048 "zoned": false, 00:23:17.048 "supported_io_types": { 00:23:17.048 "read": true, 00:23:17.048 "write": true, 00:23:17.048 "unmap": true, 00:23:17.048 "write_zeroes": true, 00:23:17.048 "flush": true, 00:23:17.048 "reset": true, 00:23:17.048 "compare": false, 00:23:17.048 "compare_and_write": false, 00:23:17.048 "abort": true, 00:23:17.048 "nvme_admin": false, 00:23:17.048 "nvme_io": false 00:23:17.048 }, 00:23:17.048 "memory_domains": [ 00:23:17.048 { 00:23:17.048 "dma_device_id": "system", 00:23:17.048 "dma_device_type": 1 00:23:17.048 }, 00:23:17.048 { 00:23:17.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.048 "dma_device_type": 2 00:23:17.048 } 00:23:17.048 ], 00:23:17.048 "driver_specific": {} 00:23:17.048 } 00:23:17.048 ] 00:23:17.048 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:23:17.048 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:17.048 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:17.048 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' 
-n Existed_Raid 00:23:17.306 [2024-06-10 11:46:49.294850] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:17.306 [2024-06-10 11:46:49.294964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:17.306 [2024-06-10 11:46:49.295020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:17.306 [2024-06-10 11:46:49.297678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.306 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.307 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.307 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.307 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.872 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.872 "name": "Existed_Raid", 00:23:17.873 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:17.873 "strip_size_kb": 0, 00:23:17.873 "state": "configuring", 00:23:17.873 "raid_level": "raid1", 00:23:17.873 "superblock": true, 00:23:17.873 "num_base_bdevs": 3, 00:23:17.873 "num_base_bdevs_discovered": 2, 00:23:17.873 "num_base_bdevs_operational": 3, 00:23:17.873 "base_bdevs_list": [ 00:23:17.873 { 00:23:17.873 "name": "BaseBdev1", 00:23:17.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.873 "is_configured": false, 00:23:17.873 "data_offset": 0, 00:23:17.873 "data_size": 0 00:23:17.873 }, 00:23:17.873 { 00:23:17.873 "name": "BaseBdev2", 00:23:17.873 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:17.873 "is_configured": true, 00:23:17.873 "data_offset": 2048, 00:23:17.873 "data_size": 63488 00:23:17.873 }, 00:23:17.873 { 00:23:17.873 "name": "BaseBdev3", 00:23:17.873 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:17.873 "is_configured": true, 00:23:17.873 "data_offset": 2048, 00:23:17.873 "data_size": 63488 00:23:17.873 } 00:23:17.873 ] 00:23:17.873 }' 00:23:17.873 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.873 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.439 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:18.697 [2024-06-10 11:46:50.527589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.697 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.954 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.954 "name": "Existed_Raid", 00:23:18.954 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:18.954 "strip_size_kb": 0, 00:23:18.954 "state": "configuring", 00:23:18.954 "raid_level": "raid1", 00:23:18.954 "superblock": true, 00:23:18.954 "num_base_bdevs": 3, 00:23:18.954 "num_base_bdevs_discovered": 1, 00:23:18.954 "num_base_bdevs_operational": 3, 00:23:18.954 "base_bdevs_list": [ 00:23:18.954 { 00:23:18.954 "name": "BaseBdev1", 00:23:18.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.954 "is_configured": false, 00:23:18.954 "data_offset": 0, 00:23:18.954 "data_size": 0 00:23:18.954 }, 00:23:18.954 { 00:23:18.954 "name": null, 00:23:18.954 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:18.954 "is_configured": false, 00:23:18.954 "data_offset": 2048, 00:23:18.954 "data_size": 63488 00:23:18.954 }, 00:23:18.954 { 00:23:18.954 "name": "BaseBdev3", 00:23:18.954 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:18.954 "is_configured": true, 00:23:18.954 "data_offset": 2048, 00:23:18.954 "data_size": 63488 00:23:18.954 } 00:23:18.954 ] 00:23:18.954 }' 00:23:18.954 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.954 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.518 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.518 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:19.776 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false 
== \f\a\l\s\e ]] 00:23:19.776 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:20.338 [2024-06-10 11:46:52.183560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:20.338 BaseBdev1 00:23:20.338 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:20.338 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:23:20.338 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:23:20.338 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:23:20.338 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:23:20.338 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:23:20.338 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:20.595 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:20.852 [ 00:23:20.852 { 00:23:20.852 "name": "BaseBdev1", 00:23:20.852 "aliases": [ 00:23:20.852 "1c0aa45e-964c-474b-b403-b3ec048df640" 00:23:20.852 ], 00:23:20.852 "product_name": "Malloc disk", 00:23:20.852 "block_size": 512, 00:23:20.852 "num_blocks": 65536, 00:23:20.852 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:20.852 "assigned_rate_limits": { 00:23:20.852 "rw_ios_per_sec": 0, 00:23:20.852 "rw_mbytes_per_sec": 0, 00:23:20.852 "r_mbytes_per_sec": 0, 00:23:20.852 "w_mbytes_per_sec": 0 00:23:20.852 }, 00:23:20.852 "claimed": true, 00:23:20.852 "claim_type": "exclusive_write", 00:23:20.852 "zoned": false, 00:23:20.852 "supported_io_types": { 00:23:20.852 "read": true, 00:23:20.852 "write": true, 00:23:20.852 "unmap": true, 00:23:20.852 "write_zeroes": true, 00:23:20.852 "flush": true, 00:23:20.852 "reset": true, 00:23:20.852 "compare": false, 00:23:20.852 "compare_and_write": false, 00:23:20.852 "abort": true, 00:23:20.852 "nvme_admin": false, 00:23:20.852 "nvme_io": false 00:23:20.852 }, 00:23:20.852 "memory_domains": [ 00:23:20.852 { 00:23:20.852 "dma_device_id": "system", 00:23:20.852 "dma_device_type": 1 00:23:20.852 }, 00:23:20.852 { 00:23:20.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.852 "dma_device_type": 2 00:23:20.852 } 00:23:20.852 ], 00:23:20.852 "driver_specific": {} 00:23:20.852 } 00:23:20.852 ] 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:20.852 11:46:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.852 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.109 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.109 "name": "Existed_Raid", 00:23:21.109 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:21.109 "strip_size_kb": 0, 00:23:21.109 "state": "configuring", 00:23:21.109 "raid_level": "raid1", 00:23:21.109 "superblock": true, 00:23:21.109 "num_base_bdevs": 3, 00:23:21.109 "num_base_bdevs_discovered": 2, 00:23:21.109 "num_base_bdevs_operational": 3, 00:23:21.109 "base_bdevs_list": [ 00:23:21.109 { 00:23:21.109 "name": "BaseBdev1", 00:23:21.109 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:21.109 "is_configured": true, 00:23:21.109 "data_offset": 2048, 00:23:21.109 "data_size": 63488 00:23:21.109 }, 00:23:21.109 { 00:23:21.109 "name": null, 00:23:21.109 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:21.109 "is_configured": false, 00:23:21.109 "data_offset": 2048, 00:23:21.109 "data_size": 63488 00:23:21.109 }, 00:23:21.109 { 00:23:21.109 "name": "BaseBdev3", 00:23:21.109 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:21.109 "is_configured": true, 00:23:21.109 "data_offset": 2048, 00:23:21.109 "data_size": 63488 00:23:21.109 } 00:23:21.109 ] 00:23:21.109 }' 00:23:21.109 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.109 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.674 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:21.674 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.238 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:22.238 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:22.496 [2024-06-10 11:46:54.320219] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.496 11:46:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.496 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.800 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.800 "name": "Existed_Raid", 00:23:22.800 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:22.800 "strip_size_kb": 0, 00:23:22.800 "state": "configuring", 00:23:22.800 "raid_level": "raid1", 00:23:22.800 "superblock": true, 00:23:22.800 "num_base_bdevs": 3, 00:23:22.800 "num_base_bdevs_discovered": 1, 00:23:22.800 "num_base_bdevs_operational": 3, 00:23:22.800 "base_bdevs_list": [ 00:23:22.800 { 00:23:22.800 "name": "BaseBdev1", 00:23:22.800 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:22.800 "is_configured": true, 00:23:22.800 "data_offset": 2048, 00:23:22.800 "data_size": 63488 00:23:22.800 }, 00:23:22.800 { 00:23:22.800 "name": null, 00:23:22.800 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:22.800 "is_configured": false, 00:23:22.800 "data_offset": 2048, 00:23:22.800 "data_size": 63488 00:23:22.800 }, 00:23:22.800 { 00:23:22.800 "name": null, 00:23:22.800 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:22.800 "is_configured": false, 00:23:22.800 "data_offset": 2048, 00:23:22.800 "data_size": 63488 00:23:22.800 } 00:23:22.800 ] 00:23:22.800 }' 00:23:22.800 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.800 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.366 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.366 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:23.624 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:23.624 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:23.881 [2024-06-10 11:46:55.787792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.881 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.139 11:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.139 "name": "Existed_Raid", 00:23:24.139 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:24.139 "strip_size_kb": 0, 00:23:24.139 "state": "configuring", 00:23:24.139 "raid_level": "raid1", 00:23:24.139 "superblock": true, 00:23:24.139 "num_base_bdevs": 3, 00:23:24.139 "num_base_bdevs_discovered": 2, 00:23:24.139 "num_base_bdevs_operational": 3, 00:23:24.139 "base_bdevs_list": [ 00:23:24.139 { 00:23:24.139 "name": "BaseBdev1", 00:23:24.139 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:24.139 "is_configured": true, 00:23:24.139 "data_offset": 2048, 00:23:24.139 "data_size": 63488 00:23:24.139 }, 00:23:24.139 { 00:23:24.139 "name": null, 00:23:24.139 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:24.139 "is_configured": false, 00:23:24.139 "data_offset": 2048, 00:23:24.139 "data_size": 63488 00:23:24.139 }, 00:23:24.139 { 00:23:24.139 "name": "BaseBdev3", 00:23:24.139 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:24.139 "is_configured": true, 00:23:24.139 "data_offset": 2048, 00:23:24.139 "data_size": 63488 00:23:24.139 } 00:23:24.139 ] 00:23:24.139 }' 00:23:24.139 11:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.139 11:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.705 11:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.705 11:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:24.963 11:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:24.963 11:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:25.221 [2024-06-10 11:46:57.264212] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:25.480 11:46:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.480 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.738 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.738 "name": "Existed_Raid", 00:23:25.738 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:25.738 "strip_size_kb": 0, 00:23:25.738 "state": "configuring", 00:23:25.738 "raid_level": "raid1", 00:23:25.738 "superblock": true, 00:23:25.738 "num_base_bdevs": 3, 00:23:25.738 "num_base_bdevs_discovered": 1, 00:23:25.738 "num_base_bdevs_operational": 3, 00:23:25.738 "base_bdevs_list": [ 00:23:25.738 { 00:23:25.738 "name": null, 00:23:25.738 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:25.738 "is_configured": false, 00:23:25.738 "data_offset": 2048, 00:23:25.738 "data_size": 63488 00:23:25.738 }, 00:23:25.738 { 00:23:25.738 "name": null, 00:23:25.738 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:25.738 "is_configured": false, 00:23:25.738 "data_offset": 2048, 00:23:25.738 "data_size": 63488 00:23:25.738 }, 00:23:25.738 { 00:23:25.738 "name": "BaseBdev3", 00:23:25.738 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:25.738 "is_configured": true, 00:23:25.738 "data_offset": 2048, 00:23:25.738 "data_size": 63488 00:23:25.738 } 00:23:25.738 ] 00:23:25.738 }' 00:23:25.738 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.738 11:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.304 11:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.304 11:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:26.872 11:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:26.872 11:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:27.131 [2024-06-10 11:46:59.008759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.131 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.390 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.390 "name": "Existed_Raid", 00:23:27.390 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:27.390 "strip_size_kb": 0, 00:23:27.390 "state": "configuring", 00:23:27.390 "raid_level": "raid1", 00:23:27.390 "superblock": true, 00:23:27.390 "num_base_bdevs": 3, 00:23:27.390 "num_base_bdevs_discovered": 2, 00:23:27.390 "num_base_bdevs_operational": 3, 00:23:27.390 "base_bdevs_list": [ 00:23:27.390 { 00:23:27.390 "name": null, 00:23:27.390 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:27.390 "is_configured": false, 00:23:27.390 "data_offset": 2048, 00:23:27.390 "data_size": 63488 00:23:27.390 }, 00:23:27.390 { 00:23:27.390 "name": "BaseBdev2", 00:23:27.390 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:27.390 "is_configured": true, 00:23:27.390 "data_offset": 2048, 00:23:27.390 "data_size": 63488 00:23:27.390 }, 00:23:27.390 { 00:23:27.390 "name": "BaseBdev3", 00:23:27.390 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:27.390 "is_configured": true, 00:23:27.390 "data_offset": 2048, 00:23:27.390 "data_size": 63488 00:23:27.390 } 00:23:27.390 ] 00:23:27.390 }' 00:23:27.390 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.390 11:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.957 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.957 11:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:28.216 11:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:28.216 11:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.216 11:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:28.474 11:47:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1c0aa45e-964c-474b-b403-b3ec048df640 00:23:28.732 [2024-06-10 11:47:00.668596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:28.732 [2024-06-10 11:47:00.669159] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:28.732 [2024-06-10 11:47:00.669318] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:28.732 [2024-06-10 11:47:00.669583] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:28.732 [2024-06-10 11:47:00.670103] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:28.732 NewBaseBdev 00:23:28.732 [2024-06-10 11:47:00.670254] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:23:28.732 [2024-06-10 11:47:00.670577] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.732 11:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:28.732 11:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:23:28.732 11:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:23:28.732 11:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:23:28.732 11:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:23:28.732 11:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:23:28.732 11:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:28.990 11:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:29.249 [ 00:23:29.249 { 00:23:29.249 "name": "NewBaseBdev", 00:23:29.249 "aliases": [ 00:23:29.249 "1c0aa45e-964c-474b-b403-b3ec048df640" 00:23:29.249 ], 00:23:29.249 "product_name": "Malloc disk", 00:23:29.249 "block_size": 512, 00:23:29.249 "num_blocks": 65536, 00:23:29.249 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:29.249 "assigned_rate_limits": { 00:23:29.249 "rw_ios_per_sec": 0, 00:23:29.249 "rw_mbytes_per_sec": 0, 00:23:29.249 "r_mbytes_per_sec": 0, 00:23:29.249 "w_mbytes_per_sec": 0 00:23:29.249 }, 00:23:29.249 "claimed": true, 00:23:29.249 "claim_type": "exclusive_write", 00:23:29.249 "zoned": false, 00:23:29.249 "supported_io_types": { 00:23:29.249 "read": true, 00:23:29.249 "write": true, 00:23:29.249 "unmap": true, 00:23:29.249 "write_zeroes": true, 00:23:29.249 "flush": true, 00:23:29.249 "reset": true, 00:23:29.249 "compare": false, 00:23:29.249 "compare_and_write": false, 00:23:29.249 "abort": true, 00:23:29.249 "nvme_admin": false, 00:23:29.249 "nvme_io": false 00:23:29.249 }, 00:23:29.249 "memory_domains": [ 00:23:29.249 { 00:23:29.249 "dma_device_id": "system", 00:23:29.249 "dma_device_type": 1 00:23:29.249 }, 00:23:29.249 { 00:23:29.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.249 "dma_device_type": 2 00:23:29.249 } 00:23:29.249 ], 00:23:29.249 
"driver_specific": {} 00:23:29.249 } 00:23:29.249 ] 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.249 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.507 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.507 "name": "Existed_Raid", 00:23:29.507 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:29.507 "strip_size_kb": 0, 00:23:29.507 "state": "online", 00:23:29.507 "raid_level": "raid1", 00:23:29.507 "superblock": true, 00:23:29.507 "num_base_bdevs": 3, 00:23:29.507 "num_base_bdevs_discovered": 3, 00:23:29.507 "num_base_bdevs_operational": 3, 00:23:29.507 "base_bdevs_list": [ 00:23:29.507 { 00:23:29.507 "name": "NewBaseBdev", 00:23:29.507 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:29.507 "is_configured": true, 00:23:29.507 "data_offset": 2048, 00:23:29.507 "data_size": 63488 00:23:29.507 }, 00:23:29.507 { 00:23:29.507 "name": "BaseBdev2", 00:23:29.507 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:29.507 "is_configured": true, 00:23:29.507 "data_offset": 2048, 00:23:29.507 "data_size": 63488 00:23:29.507 }, 00:23:29.507 { 00:23:29.507 "name": "BaseBdev3", 00:23:29.507 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:29.507 "is_configured": true, 00:23:29.507 "data_offset": 2048, 00:23:29.507 "data_size": 63488 00:23:29.507 } 00:23:29.507 ] 00:23:29.507 }' 00:23:29.507 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.507 11:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.074 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:30.074 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:30.074 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:30.074 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:23:30.074 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:30.074 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:30.074 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:30.074 11:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:30.332 [2024-06-10 11:47:02.187367] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.332 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:30.332 "name": "Existed_Raid", 00:23:30.332 "aliases": [ 00:23:30.332 "f6c4c22a-b9af-4eff-b8f0-a83a60300194" 00:23:30.332 ], 00:23:30.332 "product_name": "Raid Volume", 00:23:30.332 "block_size": 512, 00:23:30.332 "num_blocks": 63488, 00:23:30.332 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:30.332 "assigned_rate_limits": { 00:23:30.332 "rw_ios_per_sec": 0, 00:23:30.332 "rw_mbytes_per_sec": 0, 00:23:30.332 "r_mbytes_per_sec": 0, 00:23:30.332 "w_mbytes_per_sec": 0 00:23:30.332 }, 00:23:30.332 "claimed": false, 00:23:30.332 "zoned": false, 00:23:30.332 "supported_io_types": { 00:23:30.332 "read": true, 00:23:30.333 "write": true, 00:23:30.333 "unmap": false, 00:23:30.333 "write_zeroes": true, 00:23:30.333 "flush": false, 00:23:30.333 "reset": true, 00:23:30.333 "compare": false, 00:23:30.333 "compare_and_write": false, 00:23:30.333 "abort": false, 00:23:30.333 "nvme_admin": false, 00:23:30.333 "nvme_io": false 00:23:30.333 }, 00:23:30.333 "memory_domains": [ 00:23:30.333 { 00:23:30.333 "dma_device_id": "system", 00:23:30.333 "dma_device_type": 1 00:23:30.333 }, 00:23:30.333 { 00:23:30.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.333 "dma_device_type": 2 00:23:30.333 }, 00:23:30.333 { 00:23:30.333 "dma_device_id": "system", 00:23:30.333 "dma_device_type": 1 00:23:30.333 }, 00:23:30.333 { 00:23:30.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.333 "dma_device_type": 2 00:23:30.333 }, 00:23:30.333 { 00:23:30.333 "dma_device_id": "system", 00:23:30.333 "dma_device_type": 1 00:23:30.333 }, 00:23:30.333 { 00:23:30.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.333 "dma_device_type": 2 00:23:30.333 } 00:23:30.333 ], 00:23:30.333 "driver_specific": { 00:23:30.333 "raid": { 00:23:30.333 "uuid": "f6c4c22a-b9af-4eff-b8f0-a83a60300194", 00:23:30.333 "strip_size_kb": 0, 00:23:30.333 "state": "online", 00:23:30.333 "raid_level": "raid1", 00:23:30.333 "superblock": true, 00:23:30.333 "num_base_bdevs": 3, 00:23:30.333 "num_base_bdevs_discovered": 3, 00:23:30.333 "num_base_bdevs_operational": 3, 00:23:30.333 "base_bdevs_list": [ 00:23:30.333 { 00:23:30.333 "name": "NewBaseBdev", 00:23:30.333 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:30.333 "is_configured": true, 00:23:30.333 "data_offset": 2048, 00:23:30.333 "data_size": 63488 00:23:30.333 }, 00:23:30.333 { 00:23:30.333 "name": "BaseBdev2", 00:23:30.333 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:30.333 "is_configured": true, 00:23:30.333 "data_offset": 2048, 00:23:30.333 "data_size": 63488 00:23:30.333 }, 00:23:30.333 { 00:23:30.333 "name": "BaseBdev3", 00:23:30.333 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:30.333 "is_configured": true, 00:23:30.333 "data_offset": 2048, 00:23:30.333 "data_size": 63488 00:23:30.333 } 00:23:30.333 ] 00:23:30.333 } 
00:23:30.333 } 00:23:30.333 }' 00:23:30.333 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:30.333 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:30.333 BaseBdev2 00:23:30.333 BaseBdev3' 00:23:30.333 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.333 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.333 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:30.591 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.591 "name": "NewBaseBdev", 00:23:30.591 "aliases": [ 00:23:30.591 "1c0aa45e-964c-474b-b403-b3ec048df640" 00:23:30.591 ], 00:23:30.591 "product_name": "Malloc disk", 00:23:30.591 "block_size": 512, 00:23:30.591 "num_blocks": 65536, 00:23:30.591 "uuid": "1c0aa45e-964c-474b-b403-b3ec048df640", 00:23:30.591 "assigned_rate_limits": { 00:23:30.591 "rw_ios_per_sec": 0, 00:23:30.591 "rw_mbytes_per_sec": 0, 00:23:30.591 "r_mbytes_per_sec": 0, 00:23:30.591 "w_mbytes_per_sec": 0 00:23:30.591 }, 00:23:30.591 "claimed": true, 00:23:30.591 "claim_type": "exclusive_write", 00:23:30.591 "zoned": false, 00:23:30.591 "supported_io_types": { 00:23:30.591 "read": true, 00:23:30.591 "write": true, 00:23:30.591 "unmap": true, 00:23:30.591 "write_zeroes": true, 00:23:30.591 "flush": true, 00:23:30.591 "reset": true, 00:23:30.591 "compare": false, 00:23:30.591 "compare_and_write": false, 00:23:30.591 "abort": true, 00:23:30.591 "nvme_admin": false, 00:23:30.591 "nvme_io": false 00:23:30.591 }, 00:23:30.591 "memory_domains": [ 00:23:30.591 { 00:23:30.591 "dma_device_id": "system", 00:23:30.591 "dma_device_type": 1 00:23:30.591 }, 00:23:30.591 { 00:23:30.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.591 "dma_device_type": 2 00:23:30.591 } 00:23:30.591 ], 00:23:30.591 "driver_specific": {} 00:23:30.591 }' 00:23:30.591 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.591 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.591 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.591 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.591 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.850 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.850 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.850 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.850 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.850 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.850 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.850 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.850 11:47:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.851 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:30.851 11:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:31.154 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:31.154 "name": "BaseBdev2", 00:23:31.154 "aliases": [ 00:23:31.154 "fa494936-83ca-49c5-a4a5-6d523a196fb9" 00:23:31.154 ], 00:23:31.154 "product_name": "Malloc disk", 00:23:31.154 "block_size": 512, 00:23:31.154 "num_blocks": 65536, 00:23:31.154 "uuid": "fa494936-83ca-49c5-a4a5-6d523a196fb9", 00:23:31.154 "assigned_rate_limits": { 00:23:31.154 "rw_ios_per_sec": 0, 00:23:31.154 "rw_mbytes_per_sec": 0, 00:23:31.154 "r_mbytes_per_sec": 0, 00:23:31.154 "w_mbytes_per_sec": 0 00:23:31.154 }, 00:23:31.154 "claimed": true, 00:23:31.154 "claim_type": "exclusive_write", 00:23:31.154 "zoned": false, 00:23:31.154 "supported_io_types": { 00:23:31.154 "read": true, 00:23:31.154 "write": true, 00:23:31.154 "unmap": true, 00:23:31.154 "write_zeroes": true, 00:23:31.154 "flush": true, 00:23:31.154 "reset": true, 00:23:31.154 "compare": false, 00:23:31.154 "compare_and_write": false, 00:23:31.154 "abort": true, 00:23:31.154 "nvme_admin": false, 00:23:31.154 "nvme_io": false 00:23:31.154 }, 00:23:31.154 "memory_domains": [ 00:23:31.154 { 00:23:31.154 "dma_device_id": "system", 00:23:31.154 "dma_device_type": 1 00:23:31.154 }, 00:23:31.154 { 00:23:31.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.154 "dma_device_type": 2 00:23:31.154 } 00:23:31.154 ], 00:23:31.154 "driver_specific": {} 00:23:31.154 }' 00:23:31.154 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.154 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.154 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:31.154 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.154 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:31.416 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:31.675 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:23:31.675 "name": "BaseBdev3", 00:23:31.675 "aliases": [ 00:23:31.675 "9cba85c4-5a06-4326-aa27-29ed25262cac" 00:23:31.675 ], 00:23:31.675 "product_name": "Malloc disk", 00:23:31.675 "block_size": 512, 00:23:31.675 "num_blocks": 65536, 00:23:31.675 "uuid": "9cba85c4-5a06-4326-aa27-29ed25262cac", 00:23:31.675 "assigned_rate_limits": { 00:23:31.675 "rw_ios_per_sec": 0, 00:23:31.675 "rw_mbytes_per_sec": 0, 00:23:31.675 "r_mbytes_per_sec": 0, 00:23:31.675 "w_mbytes_per_sec": 0 00:23:31.675 }, 00:23:31.675 "claimed": true, 00:23:31.675 "claim_type": "exclusive_write", 00:23:31.675 "zoned": false, 00:23:31.675 "supported_io_types": { 00:23:31.675 "read": true, 00:23:31.675 "write": true, 00:23:31.675 "unmap": true, 00:23:31.675 "write_zeroes": true, 00:23:31.675 "flush": true, 00:23:31.675 "reset": true, 00:23:31.675 "compare": false, 00:23:31.675 "compare_and_write": false, 00:23:31.675 "abort": true, 00:23:31.675 "nvme_admin": false, 00:23:31.675 "nvme_io": false 00:23:31.675 }, 00:23:31.675 "memory_domains": [ 00:23:31.675 { 00:23:31.675 "dma_device_id": "system", 00:23:31.675 "dma_device_type": 1 00:23:31.675 }, 00:23:31.675 { 00:23:31.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.675 "dma_device_type": 2 00:23:31.675 } 00:23:31.675 ], 00:23:31.675 "driver_specific": {} 00:23:31.675 }' 00:23:31.675 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.675 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.675 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:31.675 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:31.933 11:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:32.191 [2024-06-10 11:47:04.199488] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:32.191 [2024-06-10 11:47:04.199728] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.191 [2024-06-10 11:47:04.199912] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.191 [2024-06-10 11:47:04.200291] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.191 [2024-06-10 11:47:04.200397] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # 
killprocess 133493 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 133493 ']' 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 133493 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 133493 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 133493' 00:23:32.191 killing process with pid 133493 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 133493 00:23:32.191 [2024-06-10 11:47:04.241329] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:32.191 11:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 133493 00:23:32.758 [2024-06-10 11:47:04.579908] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:34.195 11:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:34.195 00:23:34.195 real 0m33.056s 00:23:34.195 user 1m0.001s 00:23:34.195 sys 0m4.388s 00:23:34.195 ************************************ 00:23:34.195 END TEST raid_state_function_test_sb 00:23:34.195 ************************************ 00:23:34.195 11:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:34.195 11:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.195 11:47:06 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:23:34.195 11:47:06 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:23:34.195 11:47:06 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:34.195 11:47:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:34.195 ************************************ 00:23:34.195 START TEST raid_superblock_test 00:23:34.195 ************************************ 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 3 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:34.195 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=134506 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 134506 /var/tmp/spdk-raid.sock 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 134506 ']' 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:34.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:34.196 11:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.196 [2024-06-10 11:47:06.193091] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:23:34.196 [2024-06-10 11:47:06.194370] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134506 ] 00:23:34.454 [2024-06-10 11:47:06.375624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.713 [2024-06-10 11:47:06.588022] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.972 [2024-06-10 11:47:06.809525] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:35.231 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:35.489 malloc1 00:23:35.489 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:35.747 [2024-06-10 11:47:07.612790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:35.747 [2024-06-10 11:47:07.613135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.747 [2024-06-10 11:47:07.613318] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:35.747 [2024-06-10 11:47:07.613414] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.747 [2024-06-10 11:47:07.616100] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.747 [2024-06-10 11:47:07.616286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:35.747 pt1 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:35.747 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:36.006 malloc2 00:23:36.006 11:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:36.264 [2024-06-10 11:47:08.166814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:36.264 [2024-06-10 11:47:08.167162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.264 [2024-06-10 11:47:08.167257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:36.264 [2024-06-10 11:47:08.167378] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.264 [2024-06-10 11:47:08.170023] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.264 [2024-06-10 11:47:08.170197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:36.264 pt2 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:36.264 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:36.523 malloc3 00:23:36.523 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:36.781 [2024-06-10 11:47:08.657327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:36.781 [2024-06-10 11:47:08.657680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.781 [2024-06-10 11:47:08.657752] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:36.781 [2024-06-10 11:47:08.657919] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.781 [2024-06-10 11:47:08.660521] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.781 [2024-06-10 11:47:08.660713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:36.781 pt3 00:23:36.781 11:47:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:36.781 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:36.781 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:37.040 [2024-06-10 11:47:08.905476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:37.040 [2024-06-10 11:47:08.907777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:37.040 [2024-06-10 11:47:08.907994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:37.040 [2024-06-10 11:47:08.908303] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:37.040 [2024-06-10 11:47:08.908412] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:37.040 [2024-06-10 11:47:08.908593] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:37.040 [2024-06-10 11:47:08.909075] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:37.040 [2024-06-10 11:47:08.909188] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:37.040 [2024-06-10 11:47:08.909477] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.040 11:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.298 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.298 "name": "raid_bdev1", 00:23:37.298 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:37.298 "strip_size_kb": 0, 00:23:37.298 "state": "online", 00:23:37.298 "raid_level": "raid1", 00:23:37.298 "superblock": true, 00:23:37.298 "num_base_bdevs": 3, 00:23:37.298 "num_base_bdevs_discovered": 3, 00:23:37.298 "num_base_bdevs_operational": 3, 00:23:37.298 "base_bdevs_list": [ 00:23:37.298 { 00:23:37.298 "name": "pt1", 00:23:37.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:37.298 
"is_configured": true, 00:23:37.298 "data_offset": 2048, 00:23:37.298 "data_size": 63488 00:23:37.298 }, 00:23:37.298 { 00:23:37.298 "name": "pt2", 00:23:37.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:37.298 "is_configured": true, 00:23:37.298 "data_offset": 2048, 00:23:37.299 "data_size": 63488 00:23:37.299 }, 00:23:37.299 { 00:23:37.299 "name": "pt3", 00:23:37.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:37.299 "is_configured": true, 00:23:37.299 "data_offset": 2048, 00:23:37.299 "data_size": 63488 00:23:37.299 } 00:23:37.299 ] 00:23:37.299 }' 00:23:37.299 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.299 11:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.866 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:37.866 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:37.866 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:37.866 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:37.866 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:37.866 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:37.866 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:37.866 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:38.124 [2024-06-10 11:47:09.945870] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:38.124 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:38.124 "name": "raid_bdev1", 00:23:38.124 "aliases": [ 00:23:38.124 "850824ac-6834-41ed-9101-356afaefe4f9" 00:23:38.124 ], 00:23:38.124 "product_name": "Raid Volume", 00:23:38.124 "block_size": 512, 00:23:38.124 "num_blocks": 63488, 00:23:38.124 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:38.124 "assigned_rate_limits": { 00:23:38.124 "rw_ios_per_sec": 0, 00:23:38.124 "rw_mbytes_per_sec": 0, 00:23:38.124 "r_mbytes_per_sec": 0, 00:23:38.124 "w_mbytes_per_sec": 0 00:23:38.124 }, 00:23:38.124 "claimed": false, 00:23:38.124 "zoned": false, 00:23:38.124 "supported_io_types": { 00:23:38.124 "read": true, 00:23:38.124 "write": true, 00:23:38.124 "unmap": false, 00:23:38.124 "write_zeroes": true, 00:23:38.124 "flush": false, 00:23:38.124 "reset": true, 00:23:38.124 "compare": false, 00:23:38.124 "compare_and_write": false, 00:23:38.124 "abort": false, 00:23:38.124 "nvme_admin": false, 00:23:38.124 "nvme_io": false 00:23:38.124 }, 00:23:38.124 "memory_domains": [ 00:23:38.124 { 00:23:38.124 "dma_device_id": "system", 00:23:38.124 "dma_device_type": 1 00:23:38.124 }, 00:23:38.124 { 00:23:38.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.124 "dma_device_type": 2 00:23:38.124 }, 00:23:38.124 { 00:23:38.124 "dma_device_id": "system", 00:23:38.124 "dma_device_type": 1 00:23:38.124 }, 00:23:38.124 { 00:23:38.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.124 "dma_device_type": 2 00:23:38.124 }, 00:23:38.124 { 00:23:38.124 "dma_device_id": "system", 00:23:38.125 "dma_device_type": 1 00:23:38.125 }, 00:23:38.125 { 00:23:38.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.125 
"dma_device_type": 2 00:23:38.125 } 00:23:38.125 ], 00:23:38.125 "driver_specific": { 00:23:38.125 "raid": { 00:23:38.125 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:38.125 "strip_size_kb": 0, 00:23:38.125 "state": "online", 00:23:38.125 "raid_level": "raid1", 00:23:38.125 "superblock": true, 00:23:38.125 "num_base_bdevs": 3, 00:23:38.125 "num_base_bdevs_discovered": 3, 00:23:38.125 "num_base_bdevs_operational": 3, 00:23:38.125 "base_bdevs_list": [ 00:23:38.125 { 00:23:38.125 "name": "pt1", 00:23:38.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:38.125 "is_configured": true, 00:23:38.125 "data_offset": 2048, 00:23:38.125 "data_size": 63488 00:23:38.125 }, 00:23:38.125 { 00:23:38.125 "name": "pt2", 00:23:38.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:38.125 "is_configured": true, 00:23:38.125 "data_offset": 2048, 00:23:38.125 "data_size": 63488 00:23:38.125 }, 00:23:38.125 { 00:23:38.125 "name": "pt3", 00:23:38.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:38.125 "is_configured": true, 00:23:38.125 "data_offset": 2048, 00:23:38.125 "data_size": 63488 00:23:38.125 } 00:23:38.125 ] 00:23:38.125 } 00:23:38.125 } 00:23:38.125 }' 00:23:38.125 11:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:38.125 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:38.125 pt2 00:23:38.125 pt3' 00:23:38.125 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:38.125 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:38.125 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:38.383 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:38.383 "name": "pt1", 00:23:38.383 "aliases": [ 00:23:38.383 "00000000-0000-0000-0000-000000000001" 00:23:38.383 ], 00:23:38.383 "product_name": "passthru", 00:23:38.383 "block_size": 512, 00:23:38.383 "num_blocks": 65536, 00:23:38.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:38.383 "assigned_rate_limits": { 00:23:38.383 "rw_ios_per_sec": 0, 00:23:38.383 "rw_mbytes_per_sec": 0, 00:23:38.383 "r_mbytes_per_sec": 0, 00:23:38.383 "w_mbytes_per_sec": 0 00:23:38.383 }, 00:23:38.383 "claimed": true, 00:23:38.383 "claim_type": "exclusive_write", 00:23:38.383 "zoned": false, 00:23:38.383 "supported_io_types": { 00:23:38.383 "read": true, 00:23:38.383 "write": true, 00:23:38.383 "unmap": true, 00:23:38.383 "write_zeroes": true, 00:23:38.383 "flush": true, 00:23:38.383 "reset": true, 00:23:38.383 "compare": false, 00:23:38.383 "compare_and_write": false, 00:23:38.383 "abort": true, 00:23:38.383 "nvme_admin": false, 00:23:38.383 "nvme_io": false 00:23:38.383 }, 00:23:38.383 "memory_domains": [ 00:23:38.383 { 00:23:38.383 "dma_device_id": "system", 00:23:38.383 "dma_device_type": 1 00:23:38.383 }, 00:23:38.383 { 00:23:38.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.383 "dma_device_type": 2 00:23:38.383 } 00:23:38.383 ], 00:23:38.383 "driver_specific": { 00:23:38.383 "passthru": { 00:23:38.383 "name": "pt1", 00:23:38.383 "base_bdev_name": "malloc1" 00:23:38.383 } 00:23:38.383 } 00:23:38.383 }' 00:23:38.383 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:38.383 11:47:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:38.383 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:38.383 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:38.383 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:38.642 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:38.901 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:38.901 "name": "pt2", 00:23:38.901 "aliases": [ 00:23:38.901 "00000000-0000-0000-0000-000000000002" 00:23:38.901 ], 00:23:38.901 "product_name": "passthru", 00:23:38.901 "block_size": 512, 00:23:38.901 "num_blocks": 65536, 00:23:38.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:38.901 "assigned_rate_limits": { 00:23:38.901 "rw_ios_per_sec": 0, 00:23:38.901 "rw_mbytes_per_sec": 0, 00:23:38.901 "r_mbytes_per_sec": 0, 00:23:38.901 "w_mbytes_per_sec": 0 00:23:38.901 }, 00:23:38.901 "claimed": true, 00:23:38.901 "claim_type": "exclusive_write", 00:23:38.901 "zoned": false, 00:23:38.901 "supported_io_types": { 00:23:38.901 "read": true, 00:23:38.901 "write": true, 00:23:38.901 "unmap": true, 00:23:38.901 "write_zeroes": true, 00:23:38.901 "flush": true, 00:23:38.901 "reset": true, 00:23:38.901 "compare": false, 00:23:38.901 "compare_and_write": false, 00:23:38.901 "abort": true, 00:23:38.901 "nvme_admin": false, 00:23:38.901 "nvme_io": false 00:23:38.901 }, 00:23:38.901 "memory_domains": [ 00:23:38.901 { 00:23:38.901 "dma_device_id": "system", 00:23:38.901 "dma_device_type": 1 00:23:38.901 }, 00:23:38.901 { 00:23:38.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.901 "dma_device_type": 2 00:23:38.901 } 00:23:38.901 ], 00:23:38.901 "driver_specific": { 00:23:38.901 "passthru": { 00:23:38.901 "name": "pt2", 00:23:38.901 "base_bdev_name": "malloc2" 00:23:38.901 } 00:23:38.901 } 00:23:38.901 }' 00:23:38.901 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:38.901 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:39.159 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:39.159 11:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.159 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.159 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:23:39.159 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:39.159 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:39.159 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:39.159 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.159 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.417 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:39.417 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:39.417 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:39.417 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:39.417 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:39.417 "name": "pt3", 00:23:39.417 "aliases": [ 00:23:39.417 "00000000-0000-0000-0000-000000000003" 00:23:39.417 ], 00:23:39.417 "product_name": "passthru", 00:23:39.417 "block_size": 512, 00:23:39.417 "num_blocks": 65536, 00:23:39.417 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:39.417 "assigned_rate_limits": { 00:23:39.417 "rw_ios_per_sec": 0, 00:23:39.417 "rw_mbytes_per_sec": 0, 00:23:39.417 "r_mbytes_per_sec": 0, 00:23:39.417 "w_mbytes_per_sec": 0 00:23:39.417 }, 00:23:39.417 "claimed": true, 00:23:39.417 "claim_type": "exclusive_write", 00:23:39.417 "zoned": false, 00:23:39.417 "supported_io_types": { 00:23:39.417 "read": true, 00:23:39.417 "write": true, 00:23:39.417 "unmap": true, 00:23:39.417 "write_zeroes": true, 00:23:39.417 "flush": true, 00:23:39.417 "reset": true, 00:23:39.417 "compare": false, 00:23:39.417 "compare_and_write": false, 00:23:39.417 "abort": true, 00:23:39.417 "nvme_admin": false, 00:23:39.417 "nvme_io": false 00:23:39.417 }, 00:23:39.417 "memory_domains": [ 00:23:39.417 { 00:23:39.417 "dma_device_id": "system", 00:23:39.417 "dma_device_type": 1 00:23:39.417 }, 00:23:39.417 { 00:23:39.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.417 "dma_device_type": 2 00:23:39.417 } 00:23:39.417 ], 00:23:39.417 "driver_specific": { 00:23:39.417 "passthru": { 00:23:39.417 "name": "pt3", 00:23:39.417 "base_bdev_name": "malloc3" 00:23:39.417 } 00:23:39.417 } 00:23:39.417 }' 00:23:39.417 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:39.674 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.933 11:47:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.933 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:39.933 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:39.933 11:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:39.933 [2024-06-10 11:47:11.990238] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:40.191 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=850824ac-6834-41ed-9101-356afaefe4f9 00:23:40.191 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 850824ac-6834-41ed-9101-356afaefe4f9 ']' 00:23:40.191 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:40.448 [2024-06-10 11:47:12.250101] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:40.448 [2024-06-10 11:47:12.250371] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:40.448 [2024-06-10 11:47:12.250544] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:40.448 [2024-06-10 11:47:12.250726] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:40.448 [2024-06-10 11:47:12.250831] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:40.448 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.448 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:40.448 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:40.448 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:40.448 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:40.448 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:40.706 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:40.706 11:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:40.964 11:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:40.964 11:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:41.222 11:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:41.222 11:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:41.787 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:41.788 [2024-06-10 11:47:13.750380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:41.788 [2024-06-10 11:47:13.752887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:41.788 [2024-06-10 11:47:13.753100] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:41.788 [2024-06-10 11:47:13.753190] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:41.788 [2024-06-10 11:47:13.753380] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:41.788 [2024-06-10 11:47:13.753448] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:41.788 [2024-06-10 11:47:13.753563] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:41.788 [2024-06-10 11:47:13.753606] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:23:41.788 request: 00:23:41.788 { 00:23:41.788 "name": "raid_bdev1", 00:23:41.788 "raid_level": "raid1", 00:23:41.788 "base_bdevs": [ 00:23:41.788 "malloc1", 00:23:41.788 "malloc2", 00:23:41.788 "malloc3" 00:23:41.788 ], 00:23:41.788 "superblock": false, 00:23:41.788 "method": "bdev_raid_create", 00:23:41.788 "req_id": 1 00:23:41.788 } 00:23:41.788 Got JSON-RPC error response 00:23:41.788 response: 00:23:41.788 { 00:23:41.788 "code": -17, 00:23:41.788 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:41.788 } 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@652 -- # es=1 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.788 11:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:42.047 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:42.047 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:42.047 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:42.304 [2024-06-10 11:47:14.274430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:42.305 [2024-06-10 11:47:14.274780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.305 [2024-06-10 11:47:14.274927] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:42.305 [2024-06-10 11:47:14.275027] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.305 [2024-06-10 11:47:14.277758] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.305 [2024-06-10 11:47:14.277972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:42.305 [2024-06-10 11:47:14.278193] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:42.305 [2024-06-10 11:47:14.278353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:42.305 pt1 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.305 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.869 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:23:42.869 "name": "raid_bdev1", 00:23:42.869 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:42.869 "strip_size_kb": 0, 00:23:42.869 "state": "configuring", 00:23:42.869 "raid_level": "raid1", 00:23:42.869 "superblock": true, 00:23:42.869 "num_base_bdevs": 3, 00:23:42.869 "num_base_bdevs_discovered": 1, 00:23:42.869 "num_base_bdevs_operational": 3, 00:23:42.869 "base_bdevs_list": [ 00:23:42.869 { 00:23:42.869 "name": "pt1", 00:23:42.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:42.869 "is_configured": true, 00:23:42.869 "data_offset": 2048, 00:23:42.869 "data_size": 63488 00:23:42.869 }, 00:23:42.869 { 00:23:42.869 "name": null, 00:23:42.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:42.869 "is_configured": false, 00:23:42.869 "data_offset": 2048, 00:23:42.869 "data_size": 63488 00:23:42.869 }, 00:23:42.869 { 00:23:42.869 "name": null, 00:23:42.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:42.869 "is_configured": false, 00:23:42.869 "data_offset": 2048, 00:23:42.869 "data_size": 63488 00:23:42.869 } 00:23:42.869 ] 00:23:42.869 }' 00:23:42.869 11:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.869 11:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.434 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:23:43.434 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:43.434 [2024-06-10 11:47:15.442980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:43.434 [2024-06-10 11:47:15.443299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.434 [2024-06-10 11:47:15.443448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:43.434 [2024-06-10 11:47:15.443547] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.434 [2024-06-10 11:47:15.444225] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.434 [2024-06-10 11:47:15.444401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:43.434 [2024-06-10 11:47:15.444627] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:43.434 [2024-06-10 11:47:15.444740] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:43.434 pt2 00:23:43.434 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:43.699 [2024-06-10 11:47:15.675069] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.699 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.957 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.957 "name": "raid_bdev1", 00:23:43.957 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:43.957 "strip_size_kb": 0, 00:23:43.957 "state": "configuring", 00:23:43.957 "raid_level": "raid1", 00:23:43.957 "superblock": true, 00:23:43.957 "num_base_bdevs": 3, 00:23:43.957 "num_base_bdevs_discovered": 1, 00:23:43.957 "num_base_bdevs_operational": 3, 00:23:43.957 "base_bdevs_list": [ 00:23:43.957 { 00:23:43.957 "name": "pt1", 00:23:43.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:43.957 "is_configured": true, 00:23:43.957 "data_offset": 2048, 00:23:43.957 "data_size": 63488 00:23:43.957 }, 00:23:43.957 { 00:23:43.957 "name": null, 00:23:43.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:43.957 "is_configured": false, 00:23:43.957 "data_offset": 2048, 00:23:43.957 "data_size": 63488 00:23:43.957 }, 00:23:43.957 { 00:23:43.957 "name": null, 00:23:43.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:43.957 "is_configured": false, 00:23:43.957 "data_offset": 2048, 00:23:43.957 "data_size": 63488 00:23:43.957 } 00:23:43.957 ] 00:23:43.957 }' 00:23:43.957 11:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.957 11:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.522 11:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:44.522 11:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:44.522 11:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:44.780 [2024-06-10 11:47:16.751332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:44.780 [2024-06-10 11:47:16.751644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.781 [2024-06-10 11:47:16.751808] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:44.781 [2024-06-10 11:47:16.751924] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.781 [2024-06-10 11:47:16.752507] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.781 [2024-06-10 11:47:16.752667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:44.781 [2024-06-10 11:47:16.752899] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:44.781 [2024-06-10 11:47:16.753024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:23:44.781 pt2 00:23:44.781 11:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:44.781 11:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:44.781 11:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:45.039 [2024-06-10 11:47:16.983401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:45.039 [2024-06-10 11:47:16.983869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.039 [2024-06-10 11:47:16.983997] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:45.039 [2024-06-10 11:47:16.984105] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.039 [2024-06-10 11:47:16.984640] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.039 [2024-06-10 11:47:16.984782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:45.039 [2024-06-10 11:47:16.984973] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:45.039 [2024-06-10 11:47:16.985064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:45.039 [2024-06-10 11:47:16.985273] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:45.039 [2024-06-10 11:47:16.985426] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:45.039 [2024-06-10 11:47:16.985578] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:45.039 [2024-06-10 11:47:16.986021] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:45.039 [2024-06-10 11:47:16.986127] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:45.039 [2024-06-10 11:47:16.986374] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.039 pt3 00:23:45.039 11:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:45.039 11:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.039 
11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.039 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.298 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.298 "name": "raid_bdev1", 00:23:45.298 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:45.298 "strip_size_kb": 0, 00:23:45.298 "state": "online", 00:23:45.298 "raid_level": "raid1", 00:23:45.298 "superblock": true, 00:23:45.298 "num_base_bdevs": 3, 00:23:45.298 "num_base_bdevs_discovered": 3, 00:23:45.298 "num_base_bdevs_operational": 3, 00:23:45.298 "base_bdevs_list": [ 00:23:45.298 { 00:23:45.298 "name": "pt1", 00:23:45.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:45.298 "is_configured": true, 00:23:45.298 "data_offset": 2048, 00:23:45.298 "data_size": 63488 00:23:45.298 }, 00:23:45.298 { 00:23:45.298 "name": "pt2", 00:23:45.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:45.298 "is_configured": true, 00:23:45.298 "data_offset": 2048, 00:23:45.298 "data_size": 63488 00:23:45.298 }, 00:23:45.298 { 00:23:45.298 "name": "pt3", 00:23:45.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:45.298 "is_configured": true, 00:23:45.298 "data_offset": 2048, 00:23:45.298 "data_size": 63488 00:23:45.298 } 00:23:45.298 ] 00:23:45.298 }' 00:23:45.298 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.298 11:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.232 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:46.232 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:46.232 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:46.232 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:46.232 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:46.232 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:46.232 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:46.232 11:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:46.232 [2024-06-10 11:47:18.159958] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.232 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:46.232 "name": "raid_bdev1", 00:23:46.232 "aliases": [ 00:23:46.232 "850824ac-6834-41ed-9101-356afaefe4f9" 00:23:46.232 ], 00:23:46.232 "product_name": "Raid Volume", 00:23:46.232 "block_size": 512, 00:23:46.232 "num_blocks": 63488, 00:23:46.232 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:46.232 "assigned_rate_limits": { 00:23:46.232 "rw_ios_per_sec": 0, 00:23:46.232 "rw_mbytes_per_sec": 0, 00:23:46.232 "r_mbytes_per_sec": 0, 00:23:46.232 "w_mbytes_per_sec": 0 00:23:46.232 }, 00:23:46.232 "claimed": false, 00:23:46.232 "zoned": false, 00:23:46.232 "supported_io_types": { 00:23:46.232 "read": true, 00:23:46.232 "write": true, 00:23:46.232 "unmap": false, 00:23:46.232 "write_zeroes": true, 00:23:46.232 "flush": 
false, 00:23:46.232 "reset": true, 00:23:46.232 "compare": false, 00:23:46.232 "compare_and_write": false, 00:23:46.232 "abort": false, 00:23:46.232 "nvme_admin": false, 00:23:46.232 "nvme_io": false 00:23:46.232 }, 00:23:46.232 "memory_domains": [ 00:23:46.232 { 00:23:46.232 "dma_device_id": "system", 00:23:46.232 "dma_device_type": 1 00:23:46.232 }, 00:23:46.232 { 00:23:46.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.232 "dma_device_type": 2 00:23:46.232 }, 00:23:46.232 { 00:23:46.232 "dma_device_id": "system", 00:23:46.232 "dma_device_type": 1 00:23:46.232 }, 00:23:46.232 { 00:23:46.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.232 "dma_device_type": 2 00:23:46.232 }, 00:23:46.232 { 00:23:46.233 "dma_device_id": "system", 00:23:46.233 "dma_device_type": 1 00:23:46.233 }, 00:23:46.233 { 00:23:46.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.233 "dma_device_type": 2 00:23:46.233 } 00:23:46.233 ], 00:23:46.233 "driver_specific": { 00:23:46.233 "raid": { 00:23:46.233 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:46.233 "strip_size_kb": 0, 00:23:46.233 "state": "online", 00:23:46.233 "raid_level": "raid1", 00:23:46.233 "superblock": true, 00:23:46.233 "num_base_bdevs": 3, 00:23:46.233 "num_base_bdevs_discovered": 3, 00:23:46.233 "num_base_bdevs_operational": 3, 00:23:46.233 "base_bdevs_list": [ 00:23:46.233 { 00:23:46.233 "name": "pt1", 00:23:46.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:46.233 "is_configured": true, 00:23:46.233 "data_offset": 2048, 00:23:46.233 "data_size": 63488 00:23:46.233 }, 00:23:46.233 { 00:23:46.233 "name": "pt2", 00:23:46.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:46.233 "is_configured": true, 00:23:46.233 "data_offset": 2048, 00:23:46.233 "data_size": 63488 00:23:46.233 }, 00:23:46.233 { 00:23:46.233 "name": "pt3", 00:23:46.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:46.233 "is_configured": true, 00:23:46.233 "data_offset": 2048, 00:23:46.233 "data_size": 63488 00:23:46.233 } 00:23:46.233 ] 00:23:46.233 } 00:23:46.233 } 00:23:46.233 }' 00:23:46.233 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:46.233 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:46.233 pt2 00:23:46.233 pt3' 00:23:46.233 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:46.233 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:46.233 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:46.490 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:46.490 "name": "pt1", 00:23:46.490 "aliases": [ 00:23:46.490 "00000000-0000-0000-0000-000000000001" 00:23:46.490 ], 00:23:46.490 "product_name": "passthru", 00:23:46.490 "block_size": 512, 00:23:46.490 "num_blocks": 65536, 00:23:46.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:46.491 "assigned_rate_limits": { 00:23:46.491 "rw_ios_per_sec": 0, 00:23:46.491 "rw_mbytes_per_sec": 0, 00:23:46.491 "r_mbytes_per_sec": 0, 00:23:46.491 "w_mbytes_per_sec": 0 00:23:46.491 }, 00:23:46.491 "claimed": true, 00:23:46.491 "claim_type": "exclusive_write", 00:23:46.491 "zoned": false, 00:23:46.491 "supported_io_types": { 00:23:46.491 "read": true, 00:23:46.491 "write": true, 
00:23:46.491 "unmap": true, 00:23:46.491 "write_zeroes": true, 00:23:46.491 "flush": true, 00:23:46.491 "reset": true, 00:23:46.491 "compare": false, 00:23:46.491 "compare_and_write": false, 00:23:46.491 "abort": true, 00:23:46.491 "nvme_admin": false, 00:23:46.491 "nvme_io": false 00:23:46.491 }, 00:23:46.491 "memory_domains": [ 00:23:46.491 { 00:23:46.491 "dma_device_id": "system", 00:23:46.491 "dma_device_type": 1 00:23:46.491 }, 00:23:46.491 { 00:23:46.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.491 "dma_device_type": 2 00:23:46.491 } 00:23:46.491 ], 00:23:46.491 "driver_specific": { 00:23:46.491 "passthru": { 00:23:46.491 "name": "pt1", 00:23:46.491 "base_bdev_name": "malloc1" 00:23:46.491 } 00:23:46.491 } 00:23:46.491 }' 00:23:46.491 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.786 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.044 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:47.045 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:47.045 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:47.045 11:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:47.304 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:47.304 "name": "pt2", 00:23:47.304 "aliases": [ 00:23:47.304 "00000000-0000-0000-0000-000000000002" 00:23:47.304 ], 00:23:47.304 "product_name": "passthru", 00:23:47.304 "block_size": 512, 00:23:47.304 "num_blocks": 65536, 00:23:47.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:47.304 "assigned_rate_limits": { 00:23:47.304 "rw_ios_per_sec": 0, 00:23:47.304 "rw_mbytes_per_sec": 0, 00:23:47.304 "r_mbytes_per_sec": 0, 00:23:47.304 "w_mbytes_per_sec": 0 00:23:47.304 }, 00:23:47.304 "claimed": true, 00:23:47.304 "claim_type": "exclusive_write", 00:23:47.304 "zoned": false, 00:23:47.304 "supported_io_types": { 00:23:47.304 "read": true, 00:23:47.304 "write": true, 00:23:47.304 "unmap": true, 00:23:47.304 "write_zeroes": true, 00:23:47.304 "flush": true, 00:23:47.304 "reset": true, 00:23:47.304 "compare": false, 00:23:47.304 "compare_and_write": false, 00:23:47.304 "abort": true, 00:23:47.304 "nvme_admin": false, 00:23:47.304 "nvme_io": false 00:23:47.304 }, 00:23:47.304 "memory_domains": [ 00:23:47.304 { 00:23:47.304 "dma_device_id": "system", 00:23:47.304 "dma_device_type": 1 00:23:47.304 }, 00:23:47.304 { 
00:23:47.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.304 "dma_device_type": 2 00:23:47.304 } 00:23:47.304 ], 00:23:47.304 "driver_specific": { 00:23:47.304 "passthru": { 00:23:47.304 "name": "pt2", 00:23:47.304 "base_bdev_name": "malloc2" 00:23:47.304 } 00:23:47.304 } 00:23:47.304 }' 00:23:47.304 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.304 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.304 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:47.304 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.304 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.563 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:47.563 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.564 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.564 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:47.564 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.564 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.564 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:47.564 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:47.564 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:47.564 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:47.822 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:47.822 "name": "pt3", 00:23:47.822 "aliases": [ 00:23:47.822 "00000000-0000-0000-0000-000000000003" 00:23:47.822 ], 00:23:47.822 "product_name": "passthru", 00:23:47.822 "block_size": 512, 00:23:47.822 "num_blocks": 65536, 00:23:47.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:47.822 "assigned_rate_limits": { 00:23:47.822 "rw_ios_per_sec": 0, 00:23:47.822 "rw_mbytes_per_sec": 0, 00:23:47.822 "r_mbytes_per_sec": 0, 00:23:47.822 "w_mbytes_per_sec": 0 00:23:47.822 }, 00:23:47.822 "claimed": true, 00:23:47.822 "claim_type": "exclusive_write", 00:23:47.822 "zoned": false, 00:23:47.822 "supported_io_types": { 00:23:47.822 "read": true, 00:23:47.822 "write": true, 00:23:47.822 "unmap": true, 00:23:47.822 "write_zeroes": true, 00:23:47.822 "flush": true, 00:23:47.822 "reset": true, 00:23:47.822 "compare": false, 00:23:47.822 "compare_and_write": false, 00:23:47.822 "abort": true, 00:23:47.822 "nvme_admin": false, 00:23:47.822 "nvme_io": false 00:23:47.822 }, 00:23:47.822 "memory_domains": [ 00:23:47.822 { 00:23:47.823 "dma_device_id": "system", 00:23:47.823 "dma_device_type": 1 00:23:47.823 }, 00:23:47.823 { 00:23:47.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.823 "dma_device_type": 2 00:23:47.823 } 00:23:47.823 ], 00:23:47.823 "driver_specific": { 00:23:47.823 "passthru": { 00:23:47.823 "name": "pt3", 00:23:47.823 "base_bdev_name": "malloc3" 00:23:47.823 } 00:23:47.823 } 00:23:47.823 }' 00:23:47.823 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.823 11:47:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:48.081 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:48.081 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:48.081 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:48.081 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:48.081 11:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:48.081 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:48.081 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:48.081 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:48.081 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:48.339 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:48.339 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:48.339 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:48.597 [2024-06-10 11:47:20.460645] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.597 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 850824ac-6834-41ed-9101-356afaefe4f9 '!=' 850824ac-6834-41ed-9101-356afaefe4f9 ']' 00:23:48.597 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:48.597 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:48.597 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:48.597 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:48.855 [2024-06-10 11:47:20.740423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.855 11:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.114 11:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.114 "name": "raid_bdev1", 00:23:49.114 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:49.114 "strip_size_kb": 0, 00:23:49.114 "state": "online", 00:23:49.114 "raid_level": "raid1", 00:23:49.114 "superblock": true, 00:23:49.114 "num_base_bdevs": 3, 00:23:49.114 "num_base_bdevs_discovered": 2, 00:23:49.114 "num_base_bdevs_operational": 2, 00:23:49.114 "base_bdevs_list": [ 00:23:49.114 { 00:23:49.114 "name": null, 00:23:49.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.114 "is_configured": false, 00:23:49.114 "data_offset": 2048, 00:23:49.114 "data_size": 63488 00:23:49.114 }, 00:23:49.114 { 00:23:49.114 "name": "pt2", 00:23:49.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:49.114 "is_configured": true, 00:23:49.114 "data_offset": 2048, 00:23:49.114 "data_size": 63488 00:23:49.114 }, 00:23:49.114 { 00:23:49.114 "name": "pt3", 00:23:49.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:49.114 "is_configured": true, 00:23:49.114 "data_offset": 2048, 00:23:49.114 "data_size": 63488 00:23:49.114 } 00:23:49.114 ] 00:23:49.114 }' 00:23:49.114 11:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.114 11:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.049 11:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:50.049 [2024-06-10 11:47:22.044631] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:50.049 [2024-06-10 11:47:22.044958] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:50.049 [2024-06-10 11:47:22.045141] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:50.049 [2024-06-10 11:47:22.045336] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:50.049 [2024-06-10 11:47:22.045435] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:50.049 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:50.049 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.617 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:50.617 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:50.617 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:50.617 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:50.617 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:50.617 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:50.617 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:50.617 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:51.184 11:47:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:51.184 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:51.184 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:51.184 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:51.184 11:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:51.184 [2024-06-10 11:47:23.140836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:51.184 [2024-06-10 11:47:23.141183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.184 [2024-06-10 11:47:23.141288] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:51.184 [2024-06-10 11:47:23.141449] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.184 [2024-06-10 11:47:23.144870] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.184 [2024-06-10 11:47:23.145211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:51.184 [2024-06-10 11:47:23.145541] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:51.184 [2024-06-10 11:47:23.145722] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:51.184 pt2 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.184 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.442 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.442 "name": "raid_bdev1", 00:23:51.442 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:51.442 "strip_size_kb": 0, 00:23:51.442 "state": "configuring", 00:23:51.442 "raid_level": "raid1", 00:23:51.442 "superblock": true, 00:23:51.442 "num_base_bdevs": 3, 00:23:51.442 "num_base_bdevs_discovered": 1, 00:23:51.442 "num_base_bdevs_operational": 2, 00:23:51.442 "base_bdevs_list": [ 00:23:51.442 { 00:23:51.442 "name": 
null, 00:23:51.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.443 "is_configured": false, 00:23:51.443 "data_offset": 2048, 00:23:51.443 "data_size": 63488 00:23:51.443 }, 00:23:51.443 { 00:23:51.443 "name": "pt2", 00:23:51.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:51.443 "is_configured": true, 00:23:51.443 "data_offset": 2048, 00:23:51.443 "data_size": 63488 00:23:51.443 }, 00:23:51.443 { 00:23:51.443 "name": null, 00:23:51.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:51.443 "is_configured": false, 00:23:51.443 "data_offset": 2048, 00:23:51.443 "data_size": 63488 00:23:51.443 } 00:23:51.443 ] 00:23:51.443 }' 00:23:51.443 11:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.443 11:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.010 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:23:52.010 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:52.010 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:23:52.010 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:52.268 [2024-06-10 11:47:24.249867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:52.268 [2024-06-10 11:47:24.250179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:52.268 [2024-06-10 11:47:24.250344] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:52.268 [2024-06-10 11:47:24.250453] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:52.268 [2024-06-10 11:47:24.251168] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:52.268 [2024-06-10 11:47:24.251207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:52.268 [2024-06-10 11:47:24.251337] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:52.268 [2024-06-10 11:47:24.251364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:52.268 [2024-06-10 11:47:24.251496] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:52.268 [2024-06-10 11:47:24.251508] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:52.268 [2024-06-10 11:47:24.251635] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:52.268 [2024-06-10 11:47:24.251998] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:52.268 [2024-06-10 11:47:24.252012] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:52.268 [2024-06-10 11:47:24.252168] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.268 pt3 00:23:52.268 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:52.268 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:52.268 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:52.268 11:47:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:52.268 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:52.268 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:52.268 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.268 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.269 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.269 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.269 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.269 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.527 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.527 "name": "raid_bdev1", 00:23:52.527 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:52.527 "strip_size_kb": 0, 00:23:52.527 "state": "online", 00:23:52.527 "raid_level": "raid1", 00:23:52.527 "superblock": true, 00:23:52.527 "num_base_bdevs": 3, 00:23:52.527 "num_base_bdevs_discovered": 2, 00:23:52.527 "num_base_bdevs_operational": 2, 00:23:52.527 "base_bdevs_list": [ 00:23:52.527 { 00:23:52.527 "name": null, 00:23:52.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.527 "is_configured": false, 00:23:52.527 "data_offset": 2048, 00:23:52.527 "data_size": 63488 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "name": "pt2", 00:23:52.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.527 "is_configured": true, 00:23:52.527 "data_offset": 2048, 00:23:52.527 "data_size": 63488 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "name": "pt3", 00:23:52.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:52.527 "is_configured": true, 00:23:52.527 "data_offset": 2048, 00:23:52.527 "data_size": 63488 00:23:52.527 } 00:23:52.527 ] 00:23:52.527 }' 00:23:52.527 11:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.527 11:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.462 11:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:53.721 [2024-06-10 11:47:25.598159] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.721 [2024-06-10 11:47:25.598502] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:53.721 [2024-06-10 11:47:25.598763] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.721 [2024-06-10 11:47:25.598950] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.721 [2024-06-10 11:47:25.599054] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:53.721 11:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.721 11:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:53.979 11:47:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:53.979 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:53.979 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:23:53.979 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:23:53.980 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:54.548 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:54.805 [2024-06-10 11:47:26.643202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:54.805 [2024-06-10 11:47:26.643555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.805 [2024-06-10 11:47:26.643730] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:54.805 [2024-06-10 11:47:26.643853] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.805 [2024-06-10 11:47:26.646554] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.805 [2024-06-10 11:47:26.646845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:54.805 [2024-06-10 11:47:26.647097] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:54.805 [2024-06-10 11:47:26.647269] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:54.805 [2024-06-10 11:47:26.647645] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:54.805 [2024-06-10 11:47:26.647806] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:54.805 [2024-06-10 11:47:26.647923] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:23:54.805 [2024-06-10 11:47:26.648120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:54.805 pt1 00:23:54.805 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:23:54.805 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:54.805 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:54.805 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.805 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:54.806 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:54.806 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:54.806 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.806 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.806 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.806 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.806 
11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.806 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.064 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.064 "name": "raid_bdev1", 00:23:55.064 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:55.064 "strip_size_kb": 0, 00:23:55.064 "state": "configuring", 00:23:55.064 "raid_level": "raid1", 00:23:55.064 "superblock": true, 00:23:55.064 "num_base_bdevs": 3, 00:23:55.064 "num_base_bdevs_discovered": 1, 00:23:55.064 "num_base_bdevs_operational": 2, 00:23:55.064 "base_bdevs_list": [ 00:23:55.064 { 00:23:55.064 "name": null, 00:23:55.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.064 "is_configured": false, 00:23:55.064 "data_offset": 2048, 00:23:55.064 "data_size": 63488 00:23:55.064 }, 00:23:55.064 { 00:23:55.064 "name": "pt2", 00:23:55.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:55.064 "is_configured": true, 00:23:55.064 "data_offset": 2048, 00:23:55.064 "data_size": 63488 00:23:55.064 }, 00:23:55.064 { 00:23:55.064 "name": null, 00:23:55.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:55.064 "is_configured": false, 00:23:55.064 "data_offset": 2048, 00:23:55.064 "data_size": 63488 00:23:55.064 } 00:23:55.064 ] 00:23:55.064 }' 00:23:55.064 11:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.064 11:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.630 11:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:55.630 11:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:55.888 11:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:23:55.888 11:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:56.182 [2024-06-10 11:47:28.080116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:56.182 [2024-06-10 11:47:28.080492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.182 [2024-06-10 11:47:28.080650] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:56.182 [2024-06-10 11:47:28.080785] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.182 [2024-06-10 11:47:28.081396] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.182 [2024-06-10 11:47:28.081566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:56.182 [2024-06-10 11:47:28.081811] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:56.182 [2024-06-10 11:47:28.081944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:56.182 [2024-06-10 11:47:28.082189] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:23:56.182 [2024-06-10 11:47:28.082337] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:23:56.182 [2024-06-10 11:47:28.082616] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:23:56.182 [2024-06-10 11:47:28.083208] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:23:56.182 [2024-06-10 11:47:28.083349] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:23:56.182 [2024-06-10 11:47:28.083626] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.182 pt3 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.182 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.440 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:56.440 "name": "raid_bdev1", 00:23:56.440 "uuid": "850824ac-6834-41ed-9101-356afaefe4f9", 00:23:56.440 "strip_size_kb": 0, 00:23:56.440 "state": "online", 00:23:56.440 "raid_level": "raid1", 00:23:56.440 "superblock": true, 00:23:56.440 "num_base_bdevs": 3, 00:23:56.440 "num_base_bdevs_discovered": 2, 00:23:56.440 "num_base_bdevs_operational": 2, 00:23:56.440 "base_bdevs_list": [ 00:23:56.440 { 00:23:56.440 "name": null, 00:23:56.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.440 "is_configured": false, 00:23:56.440 "data_offset": 2048, 00:23:56.440 "data_size": 63488 00:23:56.440 }, 00:23:56.440 { 00:23:56.440 "name": "pt2", 00:23:56.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:56.440 "is_configured": true, 00:23:56.440 "data_offset": 2048, 00:23:56.440 "data_size": 63488 00:23:56.440 }, 00:23:56.440 { 00:23:56.440 "name": "pt3", 00:23:56.440 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:56.440 "is_configured": true, 00:23:56.440 "data_offset": 2048, 00:23:56.440 "data_size": 63488 00:23:56.440 } 00:23:56.440 ] 00:23:56.440 }' 00:23:56.440 11:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:56.440 11:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.006 11:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:57.006 11:47:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:57.263 11:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:57.263 11:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:57.263 11:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:57.521 [2024-06-10 11:47:29.508626] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.521 11:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 850824ac-6834-41ed-9101-356afaefe4f9 '!=' 850824ac-6834-41ed-9101-356afaefe4f9 ']' 00:23:57.521 11:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 134506 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 134506 ']' 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 134506 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 134506 00:23:57.522 killing process with pid 134506 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 134506' 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 134506 00:23:57.522 11:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 134506 00:23:57.522 [2024-06-10 11:47:29.548189] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:57.522 [2024-06-10 11:47:29.548273] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:57.522 [2024-06-10 11:47:29.548348] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:57.522 [2024-06-10 11:47:29.548360] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:23:58.088 [2024-06-10 11:47:29.889099] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:59.462 ************************************ 00:23:59.462 END TEST raid_superblock_test 00:23:59.462 ************************************ 00:23:59.462 11:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:59.462 00:23:59.462 real 0m25.249s 00:23:59.462 user 0m45.723s 00:23:59.462 sys 0m3.323s 00:23:59.462 11:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:59.462 11:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.462 11:47:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:23:59.462 11:47:31 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:23:59.462 11:47:31 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:59.462 11:47:31 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.462 ************************************ 00:23:59.462 START TEST raid_read_error_test 00:23:59.462 ************************************ 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 3 read 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.r9Jm7Yy2i2 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=135277 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135277 /var/tmp/spdk-raid.sock 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 135277 ']' 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:59.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:59.462 11:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.722 [2024-06-10 11:47:31.528165] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:23:59.722 [2024-06-10 11:47:31.528591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135277 ] 00:23:59.722 [2024-06-10 11:47:31.743102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.980 [2024-06-10 11:47:32.021793] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.545 [2024-06-10 11:47:32.296096] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:00.545 11:47:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:00.545 11:47:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:24:00.546 11:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:00.546 11:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:00.803 BaseBdev1_malloc 00:24:01.060 11:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:01.060 true 00:24:01.318 11:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:01.576 [2024-06-10 11:47:33.524225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:01.576 [2024-06-10 11:47:33.524648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.576 [2024-06-10 11:47:33.524828] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:24:01.576 [2024-06-10 11:47:33.524974] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.576 [2024-06-10 11:47:33.528027] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.576 [2024-06-10 11:47:33.528298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:01.576 BaseBdev1 00:24:01.576 11:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:01.576 11:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:02.143 BaseBdev2_malloc 00:24:02.143 11:47:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:02.143 true 00:24:02.399 11:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:02.399 [2024-06-10 11:47:34.431547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:02.399 [2024-06-10 11:47:34.431941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.399 [2024-06-10 11:47:34.432140] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:02.399 [2024-06-10 11:47:34.432282] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.399 [2024-06-10 11:47:34.435328] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.399 [2024-06-10 11:47:34.435573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:02.399 BaseBdev2 00:24:02.399 11:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:02.399 11:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:03.007 BaseBdev3_malloc 00:24:03.007 11:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:03.285 true 00:24:03.285 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:03.544 [2024-06-10 11:47:35.357911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:03.544 [2024-06-10 11:47:35.358252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.544 [2024-06-10 11:47:35.358333] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:24:03.544 [2024-06-10 11:47:35.358461] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.544 [2024-06-10 11:47:35.361157] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.544 [2024-06-10 11:47:35.361364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:03.544 BaseBdev3 00:24:03.544 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:24:03.804 [2024-06-10 11:47:35.602168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:03.804 [2024-06-10 11:47:35.604753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:03.804 [2024-06-10 11:47:35.605018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:03.804 [2024-06-10 11:47:35.605392] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:24:03.804 [2024-06-10 11:47:35.605530] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:03.804 [2024-06-10 11:47:35.605757] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:03.804 [2024-06-10 11:47:35.606258] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:24:03.804 [2024-06-10 11:47:35.606379] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:24:03.804 [2024-06-10 11:47:35.606742] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.804 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.062 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.062 "name": "raid_bdev1", 00:24:04.062 "uuid": "bc09fa0c-5581-4f1a-a7ba-c341ef1fa99c", 00:24:04.062 "strip_size_kb": 0, 00:24:04.062 "state": "online", 00:24:04.062 "raid_level": "raid1", 00:24:04.062 "superblock": true, 00:24:04.062 "num_base_bdevs": 3, 00:24:04.062 "num_base_bdevs_discovered": 3, 00:24:04.062 "num_base_bdevs_operational": 3, 00:24:04.062 "base_bdevs_list": [ 00:24:04.062 { 00:24:04.062 "name": "BaseBdev1", 00:24:04.062 "uuid": "da156f86-e2df-5d8d-a21f-a55b65cb21fd", 00:24:04.062 "is_configured": true, 00:24:04.062 "data_offset": 2048, 00:24:04.062 "data_size": 63488 00:24:04.062 }, 00:24:04.062 { 00:24:04.062 "name": "BaseBdev2", 00:24:04.062 "uuid": "f548417c-b50a-5fdf-8b32-16beeefbd8ab", 00:24:04.062 "is_configured": true, 00:24:04.062 "data_offset": 2048, 00:24:04.062 "data_size": 63488 00:24:04.062 }, 00:24:04.062 { 00:24:04.062 "name": "BaseBdev3", 00:24:04.062 "uuid": "cf8fa58e-69fc-5751-9ed5-6863639a7663", 00:24:04.062 "is_configured": true, 00:24:04.062 "data_offset": 2048, 00:24:04.062 "data_size": 63488 00:24:04.062 } 00:24:04.062 ] 00:24:04.062 }' 00:24:04.062 11:47:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.062 11:47:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.626 11:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:04.626 11:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock 
perform_tests 00:24:04.626 [2024-06-10 11:47:36.568543] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:05.562 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:05.822 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.823 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.823 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.823 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.823 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.823 11:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.082 11:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.082 "name": "raid_bdev1", 00:24:06.082 "uuid": "bc09fa0c-5581-4f1a-a7ba-c341ef1fa99c", 00:24:06.082 "strip_size_kb": 0, 00:24:06.082 "state": "online", 00:24:06.082 "raid_level": "raid1", 00:24:06.082 "superblock": true, 00:24:06.082 "num_base_bdevs": 3, 00:24:06.082 "num_base_bdevs_discovered": 3, 00:24:06.082 "num_base_bdevs_operational": 3, 00:24:06.082 "base_bdevs_list": [ 00:24:06.082 { 00:24:06.082 "name": "BaseBdev1", 00:24:06.082 "uuid": "da156f86-e2df-5d8d-a21f-a55b65cb21fd", 00:24:06.082 "is_configured": true, 00:24:06.082 "data_offset": 2048, 00:24:06.082 "data_size": 63488 00:24:06.082 }, 00:24:06.082 { 00:24:06.082 "name": "BaseBdev2", 00:24:06.082 "uuid": "f548417c-b50a-5fdf-8b32-16beeefbd8ab", 00:24:06.082 "is_configured": true, 00:24:06.082 "data_offset": 2048, 00:24:06.082 "data_size": 63488 00:24:06.082 }, 00:24:06.082 { 00:24:06.082 "name": "BaseBdev3", 00:24:06.082 "uuid": "cf8fa58e-69fc-5751-9ed5-6863639a7663", 00:24:06.082 "is_configured": true, 00:24:06.082 "data_offset": 2048, 00:24:06.082 "data_size": 63488 00:24:06.082 } 00:24:06.082 ] 00:24:06.082 }' 00:24:06.082 11:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.082 11:47:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:06.659 11:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:06.917 [2024-06-10 11:47:38.925050] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:06.917 [2024-06-10 11:47:38.925293] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:06.917 [2024-06-10 11:47:38.928427] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.917 [2024-06-10 11:47:38.928623] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.917 [2024-06-10 11:47:38.928769] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:06.917 [2024-06-10 11:47:38.928890] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:24:06.917 0 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135277 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 135277 ']' 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 135277 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 135277 00:24:06.917 killing process with pid 135277 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 135277' 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 135277 00:24:06.917 11:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 135277 00:24:06.917 [2024-06-10 11:47:38.966811] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:07.485 [2024-06-10 11:47:39.275537] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.r9Jm7Yy2i2 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:09.385 ************************************ 00:24:09.385 END TEST raid_read_error_test 00:24:09.385 ************************************ 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:09.385 00:24:09.385 real 0m9.592s 00:24:09.385 user 0m14.388s 00:24:09.385 sys 0m1.129s 
00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:09.385 11:47:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.385 11:47:41 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:24:09.385 11:47:41 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:24:09.385 11:47:41 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:09.385 11:47:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:09.385 ************************************ 00:24:09.385 START TEST raid_write_error_test 00:24:09.385 ************************************ 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 3 write 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:09.385 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.yWkICtsODE 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@808 -- # raid_pid=135494 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135494 /var/tmp/spdk-raid.sock 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 135494 ']' 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:09.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:09.386 11:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.386 [2024-06-10 11:47:41.161326] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:24:09.386 [2024-06-10 11:47:41.161791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135494 ] 00:24:09.386 [2024-06-10 11:47:41.336443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.644 [2024-06-10 11:47:41.578317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.902 [2024-06-10 11:47:41.841811] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:10.467 11:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:10.467 11:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:24:10.467 11:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:10.467 11:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:10.725 BaseBdev1_malloc 00:24:10.725 11:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:10.725 true 00:24:10.984 11:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:10.984 [2024-06-10 11:47:42.982003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:10.984 [2024-06-10 11:47:42.982381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.984 [2024-06-10 11:47:42.982583] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:24:10.984 [2024-06-10 11:47:42.982716] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.984 [2024-06-10 11:47:42.985898] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:24:10.984 [2024-06-10 11:47:42.986126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:10.984 BaseBdev1 00:24:10.984 11:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:10.984 11:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:11.243 BaseBdev2_malloc 00:24:11.243 11:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:11.501 true 00:24:11.759 11:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:12.018 [2024-06-10 11:47:43.872224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:12.018 [2024-06-10 11:47:43.872549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.018 [2024-06-10 11:47:43.872714] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:12.018 [2024-06-10 11:47:43.872829] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.018 [2024-06-10 11:47:43.875586] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.018 [2024-06-10 11:47:43.875800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:12.018 BaseBdev2 00:24:12.018 11:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:12.018 11:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:12.276 BaseBdev3_malloc 00:24:12.534 11:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:12.534 true 00:24:12.534 11:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:12.792 [2024-06-10 11:47:44.835575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:12.792 [2024-06-10 11:47:44.835942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.792 [2024-06-10 11:47:44.836103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:24:12.792 [2024-06-10 11:47:44.836231] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.792 [2024-06-10 11:47:44.839342] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.792 [2024-06-10 11:47:44.839561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.792 BaseBdev3 00:24:13.059 11:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:24:13.059 [2024-06-10 11:47:45.108047] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.059 [2024-06-10 11:47:45.110603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:13.059 [2024-06-10 11:47:45.110957] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.059 [2024-06-10 11:47:45.111343] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:24:13.059 [2024-06-10 11:47:45.111470] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:13.059 [2024-06-10 11:47:45.111643] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:13.059 [2024-06-10 11:47:45.112149] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:24:13.059 [2024-06-10 11:47:45.112280] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:24:13.059 [2024-06-10 11:47:45.112589] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.316 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.574 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:13.574 "name": "raid_bdev1", 00:24:13.574 "uuid": "b41df44f-2660-485e-ba21-cc301b905610", 00:24:13.574 "strip_size_kb": 0, 00:24:13.574 "state": "online", 00:24:13.574 "raid_level": "raid1", 00:24:13.574 "superblock": true, 00:24:13.574 "num_base_bdevs": 3, 00:24:13.574 "num_base_bdevs_discovered": 3, 00:24:13.574 "num_base_bdevs_operational": 3, 00:24:13.574 "base_bdevs_list": [ 00:24:13.574 { 00:24:13.574 "name": "BaseBdev1", 00:24:13.574 "uuid": "40e49e22-5466-5795-812f-54d1fc82028d", 00:24:13.574 "is_configured": true, 00:24:13.574 "data_offset": 2048, 00:24:13.574 "data_size": 63488 00:24:13.574 }, 00:24:13.574 { 00:24:13.574 "name": "BaseBdev2", 00:24:13.574 "uuid": "f949b41c-045c-53cc-95fd-9b78e2518dd4", 00:24:13.574 "is_configured": true, 00:24:13.574 "data_offset": 2048, 00:24:13.574 "data_size": 63488 00:24:13.574 }, 00:24:13.574 { 00:24:13.574 "name": "BaseBdev3", 00:24:13.574 "uuid": 
"edb780f1-e457-565f-812a-01879ac46474", 00:24:13.574 "is_configured": true, 00:24:13.574 "data_offset": 2048, 00:24:13.574 "data_size": 63488 00:24:13.574 } 00:24:13.574 ] 00:24:13.574 }' 00:24:13.574 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:13.574 11:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.141 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:14.141 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:14.141 [2024-06-10 11:47:46.138275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:15.075 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:15.641 [2024-06-10 11:47:47.402090] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:24:15.641 [2024-06-10 11:47:47.402466] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:15.641 [2024-06-10 11:47:47.402898] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.641 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.904 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.904 "name": "raid_bdev1", 00:24:15.904 "uuid": "b41df44f-2660-485e-ba21-cc301b905610", 00:24:15.904 "strip_size_kb": 0, 00:24:15.904 "state": "online", 00:24:15.904 
"raid_level": "raid1", 00:24:15.904 "superblock": true, 00:24:15.904 "num_base_bdevs": 3, 00:24:15.904 "num_base_bdevs_discovered": 2, 00:24:15.904 "num_base_bdevs_operational": 2, 00:24:15.904 "base_bdevs_list": [ 00:24:15.904 { 00:24:15.904 "name": null, 00:24:15.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.904 "is_configured": false, 00:24:15.904 "data_offset": 2048, 00:24:15.904 "data_size": 63488 00:24:15.904 }, 00:24:15.904 { 00:24:15.904 "name": "BaseBdev2", 00:24:15.904 "uuid": "f949b41c-045c-53cc-95fd-9b78e2518dd4", 00:24:15.904 "is_configured": true, 00:24:15.904 "data_offset": 2048, 00:24:15.904 "data_size": 63488 00:24:15.904 }, 00:24:15.904 { 00:24:15.904 "name": "BaseBdev3", 00:24:15.904 "uuid": "edb780f1-e457-565f-812a-01879ac46474", 00:24:15.904 "is_configured": true, 00:24:15.904 "data_offset": 2048, 00:24:15.904 "data_size": 63488 00:24:15.904 } 00:24:15.904 ] 00:24:15.904 }' 00:24:15.904 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.904 11:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.479 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:16.738 [2024-06-10 11:47:48.581270] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:16.738 [2024-06-10 11:47:48.581480] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:16.738 [2024-06-10 11:47:48.584721] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.738 [2024-06-10 11:47:48.584951] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.738 [2024-06-10 11:47:48.585096] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.738 [2024-06-10 11:47:48.585394] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:24:16.738 0 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135494 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 135494 ']' 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 135494 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 135494 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 135494' 00:24:16.738 killing process with pid 135494 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 135494 00:24:16.738 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 135494 00:24:16.738 [2024-06-10 11:47:48.632687] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.996 [2024-06-10 11:47:48.928383] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.yWkICtsODE 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:18.899 ************************************ 00:24:18.899 END TEST raid_write_error_test 00:24:18.899 ************************************ 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:18.899 00:24:18.899 real 0m9.624s 00:24:18.899 user 0m14.524s 00:24:18.899 sys 0m1.048s 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:18.899 11:47:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 11:47:50 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:24:18.899 11:47:50 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:24:18.899 11:47:50 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:24:18.899 11:47:50 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:24:18.899 11:47:50 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:18.899 11:47:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 ************************************ 00:24:18.899 START TEST raid_state_function_test 00:24:18.899 ************************************ 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 4 false 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:18.899 11:47:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=135706 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 135706' 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:18.899 Process raid pid: 135706 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 135706 /var/tmp/spdk-raid.sock 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 135706 ']' 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:18.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:18.899 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 [2024-06-10 11:47:50.828176] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:24:18.900 [2024-06-10 11:47:50.828610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.159 [2024-06-10 11:47:50.995025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.418 [2024-06-10 11:47:51.222228] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.418 [2024-06-10 11:47:51.453336] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:19.984 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:19.984 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:24:19.984 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:19.984 [2024-06-10 11:47:51.991173] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:19.984 [2024-06-10 11:47:51.991497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:19.984 [2024-06-10 11:47:51.991608] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:19.984 [2024-06-10 11:47:51.991745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:19.984 [2024-06-10 11:47:51.991846] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:19.984 [2024-06-10 11:47:51.991901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:19.984 [2024-06-10 11:47:51.992029] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:19.984 [2024-06-10 11:47:51.992136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.984 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:24:20.254 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.254 "name": "Existed_Raid", 00:24:20.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.254 "strip_size_kb": 64, 00:24:20.254 "state": "configuring", 00:24:20.254 "raid_level": "raid0", 00:24:20.254 "superblock": false, 00:24:20.254 "num_base_bdevs": 4, 00:24:20.254 "num_base_bdevs_discovered": 0, 00:24:20.254 "num_base_bdevs_operational": 4, 00:24:20.254 "base_bdevs_list": [ 00:24:20.254 { 00:24:20.254 "name": "BaseBdev1", 00:24:20.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.254 "is_configured": false, 00:24:20.254 "data_offset": 0, 00:24:20.254 "data_size": 0 00:24:20.254 }, 00:24:20.254 { 00:24:20.254 "name": "BaseBdev2", 00:24:20.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.255 "is_configured": false, 00:24:20.255 "data_offset": 0, 00:24:20.255 "data_size": 0 00:24:20.255 }, 00:24:20.255 { 00:24:20.255 "name": "BaseBdev3", 00:24:20.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.255 "is_configured": false, 00:24:20.255 "data_offset": 0, 00:24:20.255 "data_size": 0 00:24:20.255 }, 00:24:20.255 { 00:24:20.255 "name": "BaseBdev4", 00:24:20.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.255 "is_configured": false, 00:24:20.255 "data_offset": 0, 00:24:20.255 "data_size": 0 00:24:20.255 } 00:24:20.255 ] 00:24:20.255 }' 00:24:20.255 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.255 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.824 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:21.082 [2024-06-10 11:47:53.047242] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:21.082 [2024-06-10 11:47:53.047509] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:21.082 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:21.340 [2024-06-10 11:47:53.335350] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:21.340 [2024-06-10 11:47:53.335631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:21.340 [2024-06-10 11:47:53.335735] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:21.340 [2024-06-10 11:47:53.335829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:21.340 [2024-06-10 11:47:53.336062] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:21.340 [2024-06-10 11:47:53.336137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:21.340 [2024-06-10 11:47:53.336170] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:21.340 [2024-06-10 11:47:53.336218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:21.340 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:21.907 [2024-06-10 11:47:53.655882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.907 BaseBdev1 00:24:21.907 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:21.907 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:24:21.907 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:21.907 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:24:21.907 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:21.907 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:21.907 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:21.907 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:22.165 [ 00:24:22.165 { 00:24:22.165 "name": "BaseBdev1", 00:24:22.165 "aliases": [ 00:24:22.165 "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede" 00:24:22.165 ], 00:24:22.165 "product_name": "Malloc disk", 00:24:22.165 "block_size": 512, 00:24:22.165 "num_blocks": 65536, 00:24:22.165 "uuid": "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede", 00:24:22.165 "assigned_rate_limits": { 00:24:22.165 "rw_ios_per_sec": 0, 00:24:22.165 "rw_mbytes_per_sec": 0, 00:24:22.165 "r_mbytes_per_sec": 0, 00:24:22.165 "w_mbytes_per_sec": 0 00:24:22.165 }, 00:24:22.165 "claimed": true, 00:24:22.165 "claim_type": "exclusive_write", 00:24:22.165 "zoned": false, 00:24:22.165 "supported_io_types": { 00:24:22.165 "read": true, 00:24:22.165 "write": true, 00:24:22.165 "unmap": true, 00:24:22.165 "write_zeroes": true, 00:24:22.165 "flush": true, 00:24:22.165 "reset": true, 00:24:22.165 "compare": false, 00:24:22.165 "compare_and_write": false, 00:24:22.165 "abort": true, 00:24:22.165 "nvme_admin": false, 00:24:22.165 "nvme_io": false 00:24:22.165 }, 00:24:22.166 "memory_domains": [ 00:24:22.166 { 00:24:22.166 "dma_device_id": "system", 00:24:22.166 "dma_device_type": 1 00:24:22.166 }, 00:24:22.166 { 00:24:22.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.166 "dma_device_type": 2 00:24:22.166 } 00:24:22.166 ], 00:24:22.166 "driver_specific": {} 00:24:22.166 } 00:24:22.166 ] 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.166 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.425 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.425 "name": "Existed_Raid", 00:24:22.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.425 "strip_size_kb": 64, 00:24:22.425 "state": "configuring", 00:24:22.425 "raid_level": "raid0", 00:24:22.425 "superblock": false, 00:24:22.425 "num_base_bdevs": 4, 00:24:22.425 "num_base_bdevs_discovered": 1, 00:24:22.425 "num_base_bdevs_operational": 4, 00:24:22.425 "base_bdevs_list": [ 00:24:22.425 { 00:24:22.425 "name": "BaseBdev1", 00:24:22.425 "uuid": "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede", 00:24:22.425 "is_configured": true, 00:24:22.425 "data_offset": 0, 00:24:22.425 "data_size": 65536 00:24:22.425 }, 00:24:22.425 { 00:24:22.425 "name": "BaseBdev2", 00:24:22.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.425 "is_configured": false, 00:24:22.425 "data_offset": 0, 00:24:22.425 "data_size": 0 00:24:22.425 }, 00:24:22.425 { 00:24:22.425 "name": "BaseBdev3", 00:24:22.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.425 "is_configured": false, 00:24:22.425 "data_offset": 0, 00:24:22.425 "data_size": 0 00:24:22.425 }, 00:24:22.425 { 00:24:22.425 "name": "BaseBdev4", 00:24:22.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.425 "is_configured": false, 00:24:22.425 "data_offset": 0, 00:24:22.425 "data_size": 0 00:24:22.425 } 00:24:22.425 ] 00:24:22.425 }' 00:24:22.425 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.425 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.361 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:23.361 [2024-06-10 11:47:55.304327] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:23.361 [2024-06-10 11:47:55.304613] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:23.361 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:23.620 [2024-06-10 11:47:55.612409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:23.620 [2024-06-10 11:47:55.614875] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:23.620 [2024-06-10 11:47:55.615091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:23.620 [2024-06-10 11:47:55.615186] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:23.620 [2024-06-10 11:47:55.615246] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:23.620 [2024-06-10 11:47:55.615318] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:23.620 [2024-06-10 11:47:55.615372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.620 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.879 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.879 "name": "Existed_Raid", 00:24:23.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.879 "strip_size_kb": 64, 00:24:23.879 "state": "configuring", 00:24:23.879 "raid_level": "raid0", 00:24:23.879 "superblock": false, 00:24:23.879 "num_base_bdevs": 4, 00:24:23.879 "num_base_bdevs_discovered": 1, 00:24:23.879 "num_base_bdevs_operational": 4, 00:24:23.879 "base_bdevs_list": [ 00:24:23.879 { 00:24:23.879 "name": "BaseBdev1", 00:24:23.879 "uuid": "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede", 00:24:23.879 "is_configured": true, 00:24:23.879 "data_offset": 0, 00:24:23.879 "data_size": 65536 00:24:23.879 }, 00:24:23.879 { 00:24:23.879 "name": "BaseBdev2", 00:24:23.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.879 "is_configured": false, 00:24:23.879 "data_offset": 0, 00:24:23.879 "data_size": 0 00:24:23.879 }, 00:24:23.879 { 00:24:23.879 "name": "BaseBdev3", 00:24:23.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.879 "is_configured": false, 00:24:23.879 "data_offset": 0, 00:24:23.879 "data_size": 0 00:24:23.879 }, 00:24:23.879 { 00:24:23.879 "name": "BaseBdev4", 00:24:23.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.879 "is_configured": false, 00:24:23.879 "data_offset": 0, 00:24:23.879 "data_size": 0 00:24:23.879 } 00:24:23.879 ] 00:24:23.879 }' 00:24:23.879 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:24:23.879 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.446 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:24.704 [2024-06-10 11:47:56.718872] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:24.704 BaseBdev2 00:24:24.704 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:24.704 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:24:24.704 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:24.704 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:24:24.704 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:24.704 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:24.704 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:24.962 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:25.220 [ 00:24:25.220 { 00:24:25.220 "name": "BaseBdev2", 00:24:25.220 "aliases": [ 00:24:25.220 "b2915efc-d683-43fc-8945-1983552f4cf5" 00:24:25.220 ], 00:24:25.220 "product_name": "Malloc disk", 00:24:25.220 "block_size": 512, 00:24:25.220 "num_blocks": 65536, 00:24:25.220 "uuid": "b2915efc-d683-43fc-8945-1983552f4cf5", 00:24:25.220 "assigned_rate_limits": { 00:24:25.220 "rw_ios_per_sec": 0, 00:24:25.220 "rw_mbytes_per_sec": 0, 00:24:25.220 "r_mbytes_per_sec": 0, 00:24:25.220 "w_mbytes_per_sec": 0 00:24:25.220 }, 00:24:25.220 "claimed": true, 00:24:25.220 "claim_type": "exclusive_write", 00:24:25.220 "zoned": false, 00:24:25.220 "supported_io_types": { 00:24:25.220 "read": true, 00:24:25.220 "write": true, 00:24:25.220 "unmap": true, 00:24:25.220 "write_zeroes": true, 00:24:25.220 "flush": true, 00:24:25.220 "reset": true, 00:24:25.220 "compare": false, 00:24:25.220 "compare_and_write": false, 00:24:25.220 "abort": true, 00:24:25.220 "nvme_admin": false, 00:24:25.220 "nvme_io": false 00:24:25.220 }, 00:24:25.220 "memory_domains": [ 00:24:25.220 { 00:24:25.220 "dma_device_id": "system", 00:24:25.220 "dma_device_type": 1 00:24:25.220 }, 00:24:25.220 { 00:24:25.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.220 "dma_device_type": 2 00:24:25.220 } 00:24:25.220 ], 00:24:25.220 "driver_specific": {} 00:24:25.220 } 00:24:25.220 ] 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.220 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.477 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.477 "name": "Existed_Raid", 00:24:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.478 "strip_size_kb": 64, 00:24:25.478 "state": "configuring", 00:24:25.478 "raid_level": "raid0", 00:24:25.478 "superblock": false, 00:24:25.478 "num_base_bdevs": 4, 00:24:25.478 "num_base_bdevs_discovered": 2, 00:24:25.478 "num_base_bdevs_operational": 4, 00:24:25.478 "base_bdevs_list": [ 00:24:25.478 { 00:24:25.478 "name": "BaseBdev1", 00:24:25.478 "uuid": "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede", 00:24:25.478 "is_configured": true, 00:24:25.478 "data_offset": 0, 00:24:25.478 "data_size": 65536 00:24:25.478 }, 00:24:25.478 { 00:24:25.478 "name": "BaseBdev2", 00:24:25.478 "uuid": "b2915efc-d683-43fc-8945-1983552f4cf5", 00:24:25.478 "is_configured": true, 00:24:25.478 "data_offset": 0, 00:24:25.478 "data_size": 65536 00:24:25.478 }, 00:24:25.478 { 00:24:25.478 "name": "BaseBdev3", 00:24:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.478 "is_configured": false, 00:24:25.478 "data_offset": 0, 00:24:25.478 "data_size": 0 00:24:25.478 }, 00:24:25.478 { 00:24:25.478 "name": "BaseBdev4", 00:24:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.478 "is_configured": false, 00:24:25.478 "data_offset": 0, 00:24:25.478 "data_size": 0 00:24:25.478 } 00:24:25.478 ] 00:24:25.478 }' 00:24:25.478 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.478 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.045 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:26.303 [2024-06-10 11:47:58.266920] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:26.303 BaseBdev3 00:24:26.303 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:26.303 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:24:26.303 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:26.303 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:24:26.303 
11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:26.303 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:26.303 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:26.561 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:26.820 [ 00:24:26.820 { 00:24:26.820 "name": "BaseBdev3", 00:24:26.820 "aliases": [ 00:24:26.820 "e88a7cb4-6380-49aa-9b06-d668ad2bb3f6" 00:24:26.820 ], 00:24:26.820 "product_name": "Malloc disk", 00:24:26.820 "block_size": 512, 00:24:26.820 "num_blocks": 65536, 00:24:26.820 "uuid": "e88a7cb4-6380-49aa-9b06-d668ad2bb3f6", 00:24:26.820 "assigned_rate_limits": { 00:24:26.820 "rw_ios_per_sec": 0, 00:24:26.820 "rw_mbytes_per_sec": 0, 00:24:26.820 "r_mbytes_per_sec": 0, 00:24:26.820 "w_mbytes_per_sec": 0 00:24:26.820 }, 00:24:26.820 "claimed": true, 00:24:26.820 "claim_type": "exclusive_write", 00:24:26.820 "zoned": false, 00:24:26.820 "supported_io_types": { 00:24:26.820 "read": true, 00:24:26.820 "write": true, 00:24:26.820 "unmap": true, 00:24:26.820 "write_zeroes": true, 00:24:26.820 "flush": true, 00:24:26.820 "reset": true, 00:24:26.820 "compare": false, 00:24:26.820 "compare_and_write": false, 00:24:26.820 "abort": true, 00:24:26.820 "nvme_admin": false, 00:24:26.820 "nvme_io": false 00:24:26.820 }, 00:24:26.820 "memory_domains": [ 00:24:26.820 { 00:24:26.820 "dma_device_id": "system", 00:24:26.820 "dma_device_type": 1 00:24:26.820 }, 00:24:26.820 { 00:24:26.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.820 "dma_device_type": 2 00:24:26.820 } 00:24:26.820 ], 00:24:26.820 "driver_specific": {} 00:24:26.820 } 00:24:26.820 ] 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.820 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.078 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.078 "name": "Existed_Raid", 00:24:27.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.078 "strip_size_kb": 64, 00:24:27.078 "state": "configuring", 00:24:27.078 "raid_level": "raid0", 00:24:27.078 "superblock": false, 00:24:27.078 "num_base_bdevs": 4, 00:24:27.078 "num_base_bdevs_discovered": 3, 00:24:27.078 "num_base_bdevs_operational": 4, 00:24:27.078 "base_bdevs_list": [ 00:24:27.078 { 00:24:27.078 "name": "BaseBdev1", 00:24:27.078 "uuid": "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede", 00:24:27.078 "is_configured": true, 00:24:27.078 "data_offset": 0, 00:24:27.078 "data_size": 65536 00:24:27.078 }, 00:24:27.078 { 00:24:27.078 "name": "BaseBdev2", 00:24:27.078 "uuid": "b2915efc-d683-43fc-8945-1983552f4cf5", 00:24:27.078 "is_configured": true, 00:24:27.078 "data_offset": 0, 00:24:27.078 "data_size": 65536 00:24:27.078 }, 00:24:27.078 { 00:24:27.078 "name": "BaseBdev3", 00:24:27.078 "uuid": "e88a7cb4-6380-49aa-9b06-d668ad2bb3f6", 00:24:27.078 "is_configured": true, 00:24:27.078 "data_offset": 0, 00:24:27.078 "data_size": 65536 00:24:27.078 }, 00:24:27.078 { 00:24:27.078 "name": "BaseBdev4", 00:24:27.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.078 "is_configured": false, 00:24:27.078 "data_offset": 0, 00:24:27.078 "data_size": 0 00:24:27.078 } 00:24:27.078 ] 00:24:27.078 }' 00:24:27.078 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.078 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.644 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:27.902 [2024-06-10 11:47:59.796356] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:27.902 [2024-06-10 11:47:59.796636] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:24:27.902 [2024-06-10 11:47:59.796705] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:27.902 [2024-06-10 11:47:59.797018] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:27.902 [2024-06-10 11:47:59.797553] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:24:27.902 [2024-06-10 11:47:59.797689] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:24:27.902 [2024-06-10 11:47:59.798122] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.902 BaseBdev4 00:24:27.902 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:27.902 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:24:27.902 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:27.902 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:24:27.902 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:27.902 
11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:27.902 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:28.159 11:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:28.417 [ 00:24:28.417 { 00:24:28.417 "name": "BaseBdev4", 00:24:28.417 "aliases": [ 00:24:28.417 "aca7d430-4a62-49e8-8631-6718fe4ff863" 00:24:28.417 ], 00:24:28.417 "product_name": "Malloc disk", 00:24:28.417 "block_size": 512, 00:24:28.417 "num_blocks": 65536, 00:24:28.417 "uuid": "aca7d430-4a62-49e8-8631-6718fe4ff863", 00:24:28.417 "assigned_rate_limits": { 00:24:28.417 "rw_ios_per_sec": 0, 00:24:28.417 "rw_mbytes_per_sec": 0, 00:24:28.417 "r_mbytes_per_sec": 0, 00:24:28.417 "w_mbytes_per_sec": 0 00:24:28.417 }, 00:24:28.417 "claimed": true, 00:24:28.417 "claim_type": "exclusive_write", 00:24:28.417 "zoned": false, 00:24:28.417 "supported_io_types": { 00:24:28.417 "read": true, 00:24:28.417 "write": true, 00:24:28.417 "unmap": true, 00:24:28.417 "write_zeroes": true, 00:24:28.417 "flush": true, 00:24:28.417 "reset": true, 00:24:28.417 "compare": false, 00:24:28.417 "compare_and_write": false, 00:24:28.417 "abort": true, 00:24:28.417 "nvme_admin": false, 00:24:28.417 "nvme_io": false 00:24:28.417 }, 00:24:28.417 "memory_domains": [ 00:24:28.417 { 00:24:28.417 "dma_device_id": "system", 00:24:28.417 "dma_device_type": 1 00:24:28.417 }, 00:24:28.417 { 00:24:28.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.417 "dma_device_type": 2 00:24:28.417 } 00:24:28.417 ], 00:24:28.417 "driver_specific": {} 00:24:28.417 } 00:24:28.417 ] 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.417 11:48:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.674 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.674 "name": "Existed_Raid", 00:24:28.674 "uuid": "7bcfa411-3f24-427e-9ce8-174c7efdc634", 00:24:28.674 "strip_size_kb": 64, 00:24:28.674 "state": "online", 00:24:28.674 "raid_level": "raid0", 00:24:28.674 "superblock": false, 00:24:28.674 "num_base_bdevs": 4, 00:24:28.674 "num_base_bdevs_discovered": 4, 00:24:28.674 "num_base_bdevs_operational": 4, 00:24:28.674 "base_bdevs_list": [ 00:24:28.674 { 00:24:28.674 "name": "BaseBdev1", 00:24:28.674 "uuid": "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede", 00:24:28.674 "is_configured": true, 00:24:28.674 "data_offset": 0, 00:24:28.674 "data_size": 65536 00:24:28.674 }, 00:24:28.674 { 00:24:28.674 "name": "BaseBdev2", 00:24:28.674 "uuid": "b2915efc-d683-43fc-8945-1983552f4cf5", 00:24:28.674 "is_configured": true, 00:24:28.674 "data_offset": 0, 00:24:28.674 "data_size": 65536 00:24:28.674 }, 00:24:28.674 { 00:24:28.674 "name": "BaseBdev3", 00:24:28.674 "uuid": "e88a7cb4-6380-49aa-9b06-d668ad2bb3f6", 00:24:28.674 "is_configured": true, 00:24:28.674 "data_offset": 0, 00:24:28.674 "data_size": 65536 00:24:28.674 }, 00:24:28.674 { 00:24:28.674 "name": "BaseBdev4", 00:24:28.674 "uuid": "aca7d430-4a62-49e8-8631-6718fe4ff863", 00:24:28.674 "is_configured": true, 00:24:28.674 "data_offset": 0, 00:24:28.674 "data_size": 65536 00:24:28.674 } 00:24:28.674 ] 00:24:28.674 }' 00:24:28.674 11:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.674 11:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.237 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:29.237 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:29.237 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:29.237 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:29.237 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:29.237 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:29.237 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:29.237 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:29.495 [2024-06-10 11:48:01.349054] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.495 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:29.495 "name": "Existed_Raid", 00:24:29.495 "aliases": [ 00:24:29.495 "7bcfa411-3f24-427e-9ce8-174c7efdc634" 00:24:29.495 ], 00:24:29.495 "product_name": "Raid Volume", 00:24:29.495 "block_size": 512, 00:24:29.495 "num_blocks": 262144, 00:24:29.495 "uuid": "7bcfa411-3f24-427e-9ce8-174c7efdc634", 00:24:29.495 "assigned_rate_limits": { 00:24:29.495 "rw_ios_per_sec": 0, 00:24:29.495 "rw_mbytes_per_sec": 0, 00:24:29.495 "r_mbytes_per_sec": 0, 00:24:29.495 "w_mbytes_per_sec": 0 00:24:29.495 }, 00:24:29.495 "claimed": false, 00:24:29.495 "zoned": false, 00:24:29.495 "supported_io_types": { 00:24:29.495 "read": true, 00:24:29.495 "write": true, 00:24:29.495 
"unmap": true, 00:24:29.495 "write_zeroes": true, 00:24:29.495 "flush": true, 00:24:29.495 "reset": true, 00:24:29.495 "compare": false, 00:24:29.495 "compare_and_write": false, 00:24:29.495 "abort": false, 00:24:29.495 "nvme_admin": false, 00:24:29.495 "nvme_io": false 00:24:29.495 }, 00:24:29.495 "memory_domains": [ 00:24:29.495 { 00:24:29.495 "dma_device_id": "system", 00:24:29.495 "dma_device_type": 1 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.495 "dma_device_type": 2 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "dma_device_id": "system", 00:24:29.495 "dma_device_type": 1 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.495 "dma_device_type": 2 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "dma_device_id": "system", 00:24:29.495 "dma_device_type": 1 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.495 "dma_device_type": 2 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "dma_device_id": "system", 00:24:29.495 "dma_device_type": 1 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.495 "dma_device_type": 2 00:24:29.495 } 00:24:29.495 ], 00:24:29.495 "driver_specific": { 00:24:29.495 "raid": { 00:24:29.495 "uuid": "7bcfa411-3f24-427e-9ce8-174c7efdc634", 00:24:29.495 "strip_size_kb": 64, 00:24:29.495 "state": "online", 00:24:29.495 "raid_level": "raid0", 00:24:29.495 "superblock": false, 00:24:29.495 "num_base_bdevs": 4, 00:24:29.495 "num_base_bdevs_discovered": 4, 00:24:29.495 "num_base_bdevs_operational": 4, 00:24:29.495 "base_bdevs_list": [ 00:24:29.495 { 00:24:29.495 "name": "BaseBdev1", 00:24:29.495 "uuid": "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede", 00:24:29.495 "is_configured": true, 00:24:29.495 "data_offset": 0, 00:24:29.495 "data_size": 65536 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "name": "BaseBdev2", 00:24:29.495 "uuid": "b2915efc-d683-43fc-8945-1983552f4cf5", 00:24:29.495 "is_configured": true, 00:24:29.495 "data_offset": 0, 00:24:29.495 "data_size": 65536 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "name": "BaseBdev3", 00:24:29.495 "uuid": "e88a7cb4-6380-49aa-9b06-d668ad2bb3f6", 00:24:29.495 "is_configured": true, 00:24:29.495 "data_offset": 0, 00:24:29.495 "data_size": 65536 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "name": "BaseBdev4", 00:24:29.495 "uuid": "aca7d430-4a62-49e8-8631-6718fe4ff863", 00:24:29.495 "is_configured": true, 00:24:29.495 "data_offset": 0, 00:24:29.495 "data_size": 65536 00:24:29.495 } 00:24:29.495 ] 00:24:29.495 } 00:24:29.495 } 00:24:29.495 }' 00:24:29.495 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:29.495 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:29.495 BaseBdev2 00:24:29.495 BaseBdev3 00:24:29.495 BaseBdev4' 00:24:29.495 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:29.495 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:29.495 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:29.752 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:29.752 "name": "BaseBdev1", 00:24:29.752 "aliases": [ 00:24:29.752 
"b01b0b35-d3d2-46f3-a3c0-b299cbf6fede" 00:24:29.752 ], 00:24:29.752 "product_name": "Malloc disk", 00:24:29.752 "block_size": 512, 00:24:29.752 "num_blocks": 65536, 00:24:29.752 "uuid": "b01b0b35-d3d2-46f3-a3c0-b299cbf6fede", 00:24:29.752 "assigned_rate_limits": { 00:24:29.752 "rw_ios_per_sec": 0, 00:24:29.752 "rw_mbytes_per_sec": 0, 00:24:29.752 "r_mbytes_per_sec": 0, 00:24:29.752 "w_mbytes_per_sec": 0 00:24:29.752 }, 00:24:29.752 "claimed": true, 00:24:29.752 "claim_type": "exclusive_write", 00:24:29.752 "zoned": false, 00:24:29.752 "supported_io_types": { 00:24:29.752 "read": true, 00:24:29.752 "write": true, 00:24:29.752 "unmap": true, 00:24:29.752 "write_zeroes": true, 00:24:29.752 "flush": true, 00:24:29.752 "reset": true, 00:24:29.752 "compare": false, 00:24:29.752 "compare_and_write": false, 00:24:29.752 "abort": true, 00:24:29.752 "nvme_admin": false, 00:24:29.752 "nvme_io": false 00:24:29.752 }, 00:24:29.752 "memory_domains": [ 00:24:29.752 { 00:24:29.752 "dma_device_id": "system", 00:24:29.752 "dma_device_type": 1 00:24:29.752 }, 00:24:29.752 { 00:24:29.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.752 "dma_device_type": 2 00:24:29.752 } 00:24:29.752 ], 00:24:29.752 "driver_specific": {} 00:24:29.752 }' 00:24:29.752 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.752 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.752 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:29.752 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:29.752 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:29.752 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:29.752 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.010 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.010 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.010 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.010 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.010 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:30.010 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:30.010 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:30.010 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:30.266 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.266 "name": "BaseBdev2", 00:24:30.266 "aliases": [ 00:24:30.266 "b2915efc-d683-43fc-8945-1983552f4cf5" 00:24:30.266 ], 00:24:30.266 "product_name": "Malloc disk", 00:24:30.266 "block_size": 512, 00:24:30.266 "num_blocks": 65536, 00:24:30.266 "uuid": "b2915efc-d683-43fc-8945-1983552f4cf5", 00:24:30.266 "assigned_rate_limits": { 00:24:30.266 "rw_ios_per_sec": 0, 00:24:30.266 "rw_mbytes_per_sec": 0, 00:24:30.266 "r_mbytes_per_sec": 0, 00:24:30.266 "w_mbytes_per_sec": 0 00:24:30.266 }, 00:24:30.266 "claimed": true, 00:24:30.266 "claim_type": "exclusive_write", 
00:24:30.266 "zoned": false, 00:24:30.266 "supported_io_types": { 00:24:30.266 "read": true, 00:24:30.266 "write": true, 00:24:30.266 "unmap": true, 00:24:30.266 "write_zeroes": true, 00:24:30.266 "flush": true, 00:24:30.266 "reset": true, 00:24:30.266 "compare": false, 00:24:30.266 "compare_and_write": false, 00:24:30.266 "abort": true, 00:24:30.266 "nvme_admin": false, 00:24:30.266 "nvme_io": false 00:24:30.266 }, 00:24:30.266 "memory_domains": [ 00:24:30.266 { 00:24:30.266 "dma_device_id": "system", 00:24:30.266 "dma_device_type": 1 00:24:30.266 }, 00:24:30.266 { 00:24:30.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.266 "dma_device_type": 2 00:24:30.266 } 00:24:30.266 ], 00:24:30.266 "driver_specific": {} 00:24:30.266 }' 00:24:30.266 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.266 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.266 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:30.266 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:30.522 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:30.869 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.869 "name": "BaseBdev3", 00:24:30.869 "aliases": [ 00:24:30.869 "e88a7cb4-6380-49aa-9b06-d668ad2bb3f6" 00:24:30.869 ], 00:24:30.869 "product_name": "Malloc disk", 00:24:30.869 "block_size": 512, 00:24:30.870 "num_blocks": 65536, 00:24:30.870 "uuid": "e88a7cb4-6380-49aa-9b06-d668ad2bb3f6", 00:24:30.870 "assigned_rate_limits": { 00:24:30.870 "rw_ios_per_sec": 0, 00:24:30.870 "rw_mbytes_per_sec": 0, 00:24:30.870 "r_mbytes_per_sec": 0, 00:24:30.870 "w_mbytes_per_sec": 0 00:24:30.870 }, 00:24:30.870 "claimed": true, 00:24:30.870 "claim_type": "exclusive_write", 00:24:30.870 "zoned": false, 00:24:30.870 "supported_io_types": { 00:24:30.870 "read": true, 00:24:30.870 "write": true, 00:24:30.870 "unmap": true, 00:24:30.870 "write_zeroes": true, 00:24:30.870 "flush": true, 00:24:30.870 "reset": true, 00:24:30.870 "compare": false, 00:24:30.870 "compare_and_write": false, 00:24:30.870 "abort": true, 00:24:30.870 "nvme_admin": false, 00:24:30.870 "nvme_io": false 00:24:30.870 }, 00:24:30.870 "memory_domains": [ 00:24:30.870 { 00:24:30.870 "dma_device_id": 
"system", 00:24:30.870 "dma_device_type": 1 00:24:30.870 }, 00:24:30.870 { 00:24:30.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.870 "dma_device_type": 2 00:24:30.870 } 00:24:30.870 ], 00:24:30.870 "driver_specific": {} 00:24:30.870 }' 00:24:30.870 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.128 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.128 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:31.128 11:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.128 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.128 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.128 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.128 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.128 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.128 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.386 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.386 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.386 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.386 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:31.386 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:31.643 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:31.643 "name": "BaseBdev4", 00:24:31.643 "aliases": [ 00:24:31.643 "aca7d430-4a62-49e8-8631-6718fe4ff863" 00:24:31.643 ], 00:24:31.643 "product_name": "Malloc disk", 00:24:31.643 "block_size": 512, 00:24:31.643 "num_blocks": 65536, 00:24:31.643 "uuid": "aca7d430-4a62-49e8-8631-6718fe4ff863", 00:24:31.643 "assigned_rate_limits": { 00:24:31.643 "rw_ios_per_sec": 0, 00:24:31.643 "rw_mbytes_per_sec": 0, 00:24:31.643 "r_mbytes_per_sec": 0, 00:24:31.643 "w_mbytes_per_sec": 0 00:24:31.643 }, 00:24:31.643 "claimed": true, 00:24:31.643 "claim_type": "exclusive_write", 00:24:31.643 "zoned": false, 00:24:31.643 "supported_io_types": { 00:24:31.643 "read": true, 00:24:31.643 "write": true, 00:24:31.643 "unmap": true, 00:24:31.643 "write_zeroes": true, 00:24:31.643 "flush": true, 00:24:31.643 "reset": true, 00:24:31.643 "compare": false, 00:24:31.643 "compare_and_write": false, 00:24:31.643 "abort": true, 00:24:31.643 "nvme_admin": false, 00:24:31.643 "nvme_io": false 00:24:31.643 }, 00:24:31.643 "memory_domains": [ 00:24:31.643 { 00:24:31.643 "dma_device_id": "system", 00:24:31.643 "dma_device_type": 1 00:24:31.643 }, 00:24:31.643 { 00:24:31.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.643 "dma_device_type": 2 00:24:31.643 } 00:24:31.644 ], 00:24:31.644 "driver_specific": {} 00:24:31.644 }' 00:24:31.644 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.644 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.644 11:48:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:31.644 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.644 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.644 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.644 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.901 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.901 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.901 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.901 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.901 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.901 11:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:32.158 [2024-06-10 11:48:04.137505] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:32.158 [2024-06-10 11:48:04.137741] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:32.158 [2024-06-10 11:48:04.137889] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.416 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.674 
11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:32.674 "name": "Existed_Raid", 00:24:32.674 "uuid": "7bcfa411-3f24-427e-9ce8-174c7efdc634", 00:24:32.674 "strip_size_kb": 64, 00:24:32.674 "state": "offline", 00:24:32.674 "raid_level": "raid0", 00:24:32.674 "superblock": false, 00:24:32.674 "num_base_bdevs": 4, 00:24:32.674 "num_base_bdevs_discovered": 3, 00:24:32.674 "num_base_bdevs_operational": 3, 00:24:32.674 "base_bdevs_list": [ 00:24:32.674 { 00:24:32.674 "name": null, 00:24:32.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.674 "is_configured": false, 00:24:32.674 "data_offset": 0, 00:24:32.674 "data_size": 65536 00:24:32.674 }, 00:24:32.674 { 00:24:32.674 "name": "BaseBdev2", 00:24:32.674 "uuid": "b2915efc-d683-43fc-8945-1983552f4cf5", 00:24:32.674 "is_configured": true, 00:24:32.674 "data_offset": 0, 00:24:32.674 "data_size": 65536 00:24:32.674 }, 00:24:32.674 { 00:24:32.674 "name": "BaseBdev3", 00:24:32.674 "uuid": "e88a7cb4-6380-49aa-9b06-d668ad2bb3f6", 00:24:32.674 "is_configured": true, 00:24:32.674 "data_offset": 0, 00:24:32.674 "data_size": 65536 00:24:32.674 }, 00:24:32.674 { 00:24:32.674 "name": "BaseBdev4", 00:24:32.674 "uuid": "aca7d430-4a62-49e8-8631-6718fe4ff863", 00:24:32.674 "is_configured": true, 00:24:32.674 "data_offset": 0, 00:24:32.674 "data_size": 65536 00:24:32.674 } 00:24:32.674 ] 00:24:32.674 }' 00:24:32.674 11:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:32.674 11:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.240 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:33.240 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:33.240 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.240 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:33.240 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:33.240 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:33.240 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:33.498 [2024-06-10 11:48:05.475163] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:33.756 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:33.756 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:33.756 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.756 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:34.014 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:34.014 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:34.014 11:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:24:34.014 [2024-06-10 11:48:06.003576] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:34.278 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:34.278 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:34.278 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:34.278 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.571 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:34.571 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:34.571 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:34.571 [2024-06-10 11:48:06.591654] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:34.571 [2024-06-10 11:48:06.591909] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:24:34.829 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:34.829 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:34.829 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.829 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:35.088 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:35.088 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:35.088 11:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:35.088 11:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:35.088 11:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:35.088 11:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:35.346 BaseBdev2 00:24:35.346 11:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:35.346 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:24:35.346 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:35.346 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:24:35.346 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:35.346 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:35.346 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:35.604 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:35.862 [ 00:24:35.862 { 00:24:35.862 "name": "BaseBdev2", 00:24:35.862 "aliases": [ 00:24:35.862 "57ca3534-4ff3-48ee-9840-01905bd9872b" 00:24:35.862 ], 00:24:35.862 "product_name": "Malloc disk", 00:24:35.862 "block_size": 512, 00:24:35.862 "num_blocks": 65536, 00:24:35.862 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:35.862 "assigned_rate_limits": { 00:24:35.862 "rw_ios_per_sec": 0, 00:24:35.862 "rw_mbytes_per_sec": 0, 00:24:35.862 "r_mbytes_per_sec": 0, 00:24:35.862 "w_mbytes_per_sec": 0 00:24:35.862 }, 00:24:35.862 "claimed": false, 00:24:35.862 "zoned": false, 00:24:35.862 "supported_io_types": { 00:24:35.862 "read": true, 00:24:35.862 "write": true, 00:24:35.862 "unmap": true, 00:24:35.862 "write_zeroes": true, 00:24:35.862 "flush": true, 00:24:35.862 "reset": true, 00:24:35.862 "compare": false, 00:24:35.862 "compare_and_write": false, 00:24:35.862 "abort": true, 00:24:35.862 "nvme_admin": false, 00:24:35.862 "nvme_io": false 00:24:35.862 }, 00:24:35.862 "memory_domains": [ 00:24:35.862 { 00:24:35.862 "dma_device_id": "system", 00:24:35.862 "dma_device_type": 1 00:24:35.862 }, 00:24:35.862 { 00:24:35.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.862 "dma_device_type": 2 00:24:35.862 } 00:24:35.862 ], 00:24:35.862 "driver_specific": {} 00:24:35.862 } 00:24:35.862 ] 00:24:35.862 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:35.862 11:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:35.862 11:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:35.862 11:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:36.120 BaseBdev3 00:24:36.120 11:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:36.120 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:24:36.120 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:36.120 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:24:36.120 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:36.120 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:36.120 11:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:36.377 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:36.377 [ 00:24:36.377 { 00:24:36.377 "name": "BaseBdev3", 00:24:36.377 "aliases": [ 00:24:36.377 "45d5390a-28de-426b-bfe5-98c1001841ae" 00:24:36.377 ], 00:24:36.377 "product_name": "Malloc disk", 00:24:36.377 "block_size": 512, 00:24:36.377 "num_blocks": 65536, 00:24:36.377 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:36.377 "assigned_rate_limits": { 00:24:36.377 "rw_ios_per_sec": 0, 00:24:36.377 "rw_mbytes_per_sec": 0, 00:24:36.377 "r_mbytes_per_sec": 0, 00:24:36.377 "w_mbytes_per_sec": 0 00:24:36.377 }, 00:24:36.377 
"claimed": false, 00:24:36.377 "zoned": false, 00:24:36.377 "supported_io_types": { 00:24:36.377 "read": true, 00:24:36.377 "write": true, 00:24:36.377 "unmap": true, 00:24:36.377 "write_zeroes": true, 00:24:36.377 "flush": true, 00:24:36.377 "reset": true, 00:24:36.377 "compare": false, 00:24:36.377 "compare_and_write": false, 00:24:36.377 "abort": true, 00:24:36.377 "nvme_admin": false, 00:24:36.377 "nvme_io": false 00:24:36.377 }, 00:24:36.377 "memory_domains": [ 00:24:36.377 { 00:24:36.377 "dma_device_id": "system", 00:24:36.377 "dma_device_type": 1 00:24:36.377 }, 00:24:36.377 { 00:24:36.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.377 "dma_device_type": 2 00:24:36.377 } 00:24:36.377 ], 00:24:36.377 "driver_specific": {} 00:24:36.377 } 00:24:36.377 ] 00:24:36.377 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:36.377 11:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:36.377 11:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:36.377 11:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:36.636 BaseBdev4 00:24:36.636 11:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:36.636 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:24:36.636 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:36.636 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:24:36.636 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:36.636 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:36.636 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:36.894 11:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:37.152 [ 00:24:37.152 { 00:24:37.152 "name": "BaseBdev4", 00:24:37.152 "aliases": [ 00:24:37.152 "96336603-d0b3-4266-bac1-80c3a9f1d89e" 00:24:37.152 ], 00:24:37.152 "product_name": "Malloc disk", 00:24:37.152 "block_size": 512, 00:24:37.152 "num_blocks": 65536, 00:24:37.152 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:37.152 "assigned_rate_limits": { 00:24:37.152 "rw_ios_per_sec": 0, 00:24:37.152 "rw_mbytes_per_sec": 0, 00:24:37.152 "r_mbytes_per_sec": 0, 00:24:37.152 "w_mbytes_per_sec": 0 00:24:37.152 }, 00:24:37.152 "claimed": false, 00:24:37.152 "zoned": false, 00:24:37.152 "supported_io_types": { 00:24:37.152 "read": true, 00:24:37.152 "write": true, 00:24:37.152 "unmap": true, 00:24:37.152 "write_zeroes": true, 00:24:37.152 "flush": true, 00:24:37.152 "reset": true, 00:24:37.152 "compare": false, 00:24:37.152 "compare_and_write": false, 00:24:37.152 "abort": true, 00:24:37.152 "nvme_admin": false, 00:24:37.152 "nvme_io": false 00:24:37.152 }, 00:24:37.152 "memory_domains": [ 00:24:37.152 { 00:24:37.152 "dma_device_id": "system", 00:24:37.152 "dma_device_type": 1 00:24:37.152 }, 00:24:37.152 { 00:24:37.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:37.152 "dma_device_type": 2 00:24:37.152 } 00:24:37.152 ], 00:24:37.152 "driver_specific": {} 00:24:37.152 } 00:24:37.152 ] 00:24:37.152 11:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:37.152 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:37.152 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:37.152 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:37.410 [2024-06-10 11:48:09.291548] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:37.410 [2024-06-10 11:48:09.292213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:37.410 [2024-06-10 11:48:09.292372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:37.410 [2024-06-10 11:48:09.294597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:37.410 [2024-06-10 11:48:09.294819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.410 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:37.668 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.668 "name": "Existed_Raid", 00:24:37.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.668 "strip_size_kb": 64, 00:24:37.668 "state": "configuring", 00:24:37.668 "raid_level": "raid0", 00:24:37.668 "superblock": false, 00:24:37.668 "num_base_bdevs": 4, 00:24:37.668 "num_base_bdevs_discovered": 3, 00:24:37.668 "num_base_bdevs_operational": 4, 00:24:37.668 "base_bdevs_list": [ 00:24:37.668 { 00:24:37.668 "name": "BaseBdev1", 00:24:37.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.668 "is_configured": false, 00:24:37.668 "data_offset": 0, 00:24:37.668 "data_size": 0 00:24:37.668 }, 00:24:37.668 { 
00:24:37.668 "name": "BaseBdev2", 00:24:37.668 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:37.668 "is_configured": true, 00:24:37.668 "data_offset": 0, 00:24:37.668 "data_size": 65536 00:24:37.668 }, 00:24:37.668 { 00:24:37.668 "name": "BaseBdev3", 00:24:37.668 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:37.668 "is_configured": true, 00:24:37.668 "data_offset": 0, 00:24:37.668 "data_size": 65536 00:24:37.668 }, 00:24:37.668 { 00:24:37.668 "name": "BaseBdev4", 00:24:37.668 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:37.668 "is_configured": true, 00:24:37.668 "data_offset": 0, 00:24:37.668 "data_size": 65536 00:24:37.668 } 00:24:37.668 ] 00:24:37.668 }' 00:24:37.668 11:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.668 11:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.233 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:38.491 [2024-06-10 11:48:10.355113] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.491 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.749 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.749 "name": "Existed_Raid", 00:24:38.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.749 "strip_size_kb": 64, 00:24:38.749 "state": "configuring", 00:24:38.749 "raid_level": "raid0", 00:24:38.749 "superblock": false, 00:24:38.749 "num_base_bdevs": 4, 00:24:38.749 "num_base_bdevs_discovered": 2, 00:24:38.749 "num_base_bdevs_operational": 4, 00:24:38.749 "base_bdevs_list": [ 00:24:38.749 { 00:24:38.749 "name": "BaseBdev1", 00:24:38.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.749 "is_configured": false, 00:24:38.749 "data_offset": 0, 00:24:38.749 "data_size": 0 00:24:38.749 }, 00:24:38.749 { 00:24:38.749 "name": null, 00:24:38.749 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:38.749 "is_configured": false, 00:24:38.749 
"data_offset": 0, 00:24:38.749 "data_size": 65536 00:24:38.749 }, 00:24:38.749 { 00:24:38.749 "name": "BaseBdev3", 00:24:38.749 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:38.749 "is_configured": true, 00:24:38.749 "data_offset": 0, 00:24:38.749 "data_size": 65536 00:24:38.749 }, 00:24:38.749 { 00:24:38.749 "name": "BaseBdev4", 00:24:38.749 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:38.749 "is_configured": true, 00:24:38.749 "data_offset": 0, 00:24:38.749 "data_size": 65536 00:24:38.749 } 00:24:38.749 ] 00:24:38.749 }' 00:24:38.749 11:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.749 11:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.314 11:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.314 11:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:39.572 11:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:39.572 11:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:39.829 [2024-06-10 11:48:11.835404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:39.829 BaseBdev1 00:24:39.829 11:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:39.829 11:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:24:39.829 11:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:39.829 11:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:24:39.829 11:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:39.829 11:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:39.829 11:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:40.395 [ 00:24:40.395 { 00:24:40.395 "name": "BaseBdev1", 00:24:40.395 "aliases": [ 00:24:40.395 "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e" 00:24:40.395 ], 00:24:40.395 "product_name": "Malloc disk", 00:24:40.395 "block_size": 512, 00:24:40.395 "num_blocks": 65536, 00:24:40.395 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:40.395 "assigned_rate_limits": { 00:24:40.395 "rw_ios_per_sec": 0, 00:24:40.395 "rw_mbytes_per_sec": 0, 00:24:40.395 "r_mbytes_per_sec": 0, 00:24:40.395 "w_mbytes_per_sec": 0 00:24:40.395 }, 00:24:40.395 "claimed": true, 00:24:40.395 "claim_type": "exclusive_write", 00:24:40.395 "zoned": false, 00:24:40.395 "supported_io_types": { 00:24:40.395 "read": true, 00:24:40.395 "write": true, 00:24:40.395 "unmap": true, 00:24:40.395 "write_zeroes": true, 00:24:40.395 "flush": true, 00:24:40.395 "reset": true, 00:24:40.395 "compare": false, 00:24:40.395 "compare_and_write": false, 00:24:40.395 "abort": true, 00:24:40.395 "nvme_admin": false, 
00:24:40.395 "nvme_io": false 00:24:40.395 }, 00:24:40.395 "memory_domains": [ 00:24:40.395 { 00:24:40.395 "dma_device_id": "system", 00:24:40.395 "dma_device_type": 1 00:24:40.395 }, 00:24:40.395 { 00:24:40.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.395 "dma_device_type": 2 00:24:40.395 } 00:24:40.395 ], 00:24:40.395 "driver_specific": {} 00:24:40.395 } 00:24:40.395 ] 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.395 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.654 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.654 "name": "Existed_Raid", 00:24:40.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.654 "strip_size_kb": 64, 00:24:40.654 "state": "configuring", 00:24:40.654 "raid_level": "raid0", 00:24:40.654 "superblock": false, 00:24:40.654 "num_base_bdevs": 4, 00:24:40.654 "num_base_bdevs_discovered": 3, 00:24:40.654 "num_base_bdevs_operational": 4, 00:24:40.654 "base_bdevs_list": [ 00:24:40.654 { 00:24:40.654 "name": "BaseBdev1", 00:24:40.654 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:40.654 "is_configured": true, 00:24:40.654 "data_offset": 0, 00:24:40.654 "data_size": 65536 00:24:40.654 }, 00:24:40.654 { 00:24:40.654 "name": null, 00:24:40.654 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:40.654 "is_configured": false, 00:24:40.654 "data_offset": 0, 00:24:40.654 "data_size": 65536 00:24:40.654 }, 00:24:40.654 { 00:24:40.654 "name": "BaseBdev3", 00:24:40.654 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:40.654 "is_configured": true, 00:24:40.654 "data_offset": 0, 00:24:40.654 "data_size": 65536 00:24:40.654 }, 00:24:40.654 { 00:24:40.654 "name": "BaseBdev4", 00:24:40.654 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:40.654 "is_configured": true, 00:24:40.654 "data_offset": 0, 00:24:40.654 "data_size": 65536 00:24:40.654 } 00:24:40.654 ] 00:24:40.654 }' 00:24:40.654 11:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.654 11:48:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:41.283 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.283 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:41.541 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:41.541 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:41.799 [2024-06-10 11:48:13.711876] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.799 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.057 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.057 "name": "Existed_Raid", 00:24:42.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.057 "strip_size_kb": 64, 00:24:42.057 "state": "configuring", 00:24:42.057 "raid_level": "raid0", 00:24:42.057 "superblock": false, 00:24:42.057 "num_base_bdevs": 4, 00:24:42.057 "num_base_bdevs_discovered": 2, 00:24:42.057 "num_base_bdevs_operational": 4, 00:24:42.057 "base_bdevs_list": [ 00:24:42.057 { 00:24:42.057 "name": "BaseBdev1", 00:24:42.057 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:42.057 "is_configured": true, 00:24:42.057 "data_offset": 0, 00:24:42.057 "data_size": 65536 00:24:42.057 }, 00:24:42.057 { 00:24:42.057 "name": null, 00:24:42.057 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:42.057 "is_configured": false, 00:24:42.057 "data_offset": 0, 00:24:42.057 "data_size": 65536 00:24:42.057 }, 00:24:42.057 { 00:24:42.057 "name": null, 00:24:42.057 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:42.057 "is_configured": false, 00:24:42.057 "data_offset": 0, 00:24:42.057 "data_size": 65536 00:24:42.057 }, 00:24:42.057 { 00:24:42.057 "name": "BaseBdev4", 00:24:42.057 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:42.057 "is_configured": true, 
00:24:42.057 "data_offset": 0, 00:24:42.057 "data_size": 65536 00:24:42.057 } 00:24:42.057 ] 00:24:42.057 }' 00:24:42.057 11:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.057 11:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.624 11:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:42.624 11:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.190 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:43.190 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:43.448 [2024-06-10 11:48:15.280270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.448 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.706 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:43.706 "name": "Existed_Raid", 00:24:43.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.706 "strip_size_kb": 64, 00:24:43.706 "state": "configuring", 00:24:43.706 "raid_level": "raid0", 00:24:43.706 "superblock": false, 00:24:43.706 "num_base_bdevs": 4, 00:24:43.706 "num_base_bdevs_discovered": 3, 00:24:43.706 "num_base_bdevs_operational": 4, 00:24:43.706 "base_bdevs_list": [ 00:24:43.706 { 00:24:43.706 "name": "BaseBdev1", 00:24:43.706 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:43.706 "is_configured": true, 00:24:43.706 "data_offset": 0, 00:24:43.706 "data_size": 65536 00:24:43.706 }, 00:24:43.706 { 00:24:43.706 "name": null, 00:24:43.706 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:43.706 "is_configured": false, 00:24:43.706 "data_offset": 0, 00:24:43.706 "data_size": 65536 00:24:43.706 }, 00:24:43.706 { 00:24:43.706 "name": "BaseBdev3", 00:24:43.706 
"uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:43.706 "is_configured": true, 00:24:43.706 "data_offset": 0, 00:24:43.706 "data_size": 65536 00:24:43.706 }, 00:24:43.706 { 00:24:43.706 "name": "BaseBdev4", 00:24:43.706 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:43.706 "is_configured": true, 00:24:43.706 "data_offset": 0, 00:24:43.706 "data_size": 65536 00:24:43.706 } 00:24:43.706 ] 00:24:43.706 }' 00:24:43.706 11:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:43.706 11:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.271 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.271 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:44.533 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:44.533 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:44.799 [2024-06-10 11:48:16.712629] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.799 11:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.364 11:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.364 "name": "Existed_Raid", 00:24:45.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.364 "strip_size_kb": 64, 00:24:45.364 "state": "configuring", 00:24:45.364 "raid_level": "raid0", 00:24:45.364 "superblock": false, 00:24:45.364 "num_base_bdevs": 4, 00:24:45.364 "num_base_bdevs_discovered": 2, 00:24:45.364 "num_base_bdevs_operational": 4, 00:24:45.364 "base_bdevs_list": [ 00:24:45.364 { 00:24:45.364 "name": null, 00:24:45.364 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:45.364 "is_configured": false, 00:24:45.364 "data_offset": 0, 00:24:45.364 "data_size": 65536 00:24:45.364 }, 00:24:45.364 { 
00:24:45.364 "name": null, 00:24:45.364 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:45.364 "is_configured": false, 00:24:45.364 "data_offset": 0, 00:24:45.364 "data_size": 65536 00:24:45.364 }, 00:24:45.364 { 00:24:45.364 "name": "BaseBdev3", 00:24:45.364 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:45.364 "is_configured": true, 00:24:45.364 "data_offset": 0, 00:24:45.364 "data_size": 65536 00:24:45.364 }, 00:24:45.364 { 00:24:45.364 "name": "BaseBdev4", 00:24:45.364 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:45.364 "is_configured": true, 00:24:45.364 "data_offset": 0, 00:24:45.364 "data_size": 65536 00:24:45.364 } 00:24:45.364 ] 00:24:45.364 }' 00:24:45.364 11:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.364 11:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.929 11:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.929 11:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:46.187 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:46.187 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:46.445 [2024-06-10 11:48:18.273961] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.445 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:46.702 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:46.702 "name": "Existed_Raid", 00:24:46.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.702 "strip_size_kb": 64, 00:24:46.702 "state": "configuring", 00:24:46.702 "raid_level": "raid0", 00:24:46.702 "superblock": false, 00:24:46.702 "num_base_bdevs": 4, 00:24:46.702 "num_base_bdevs_discovered": 3, 00:24:46.702 
"num_base_bdevs_operational": 4, 00:24:46.702 "base_bdevs_list": [ 00:24:46.702 { 00:24:46.702 "name": null, 00:24:46.702 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:46.702 "is_configured": false, 00:24:46.702 "data_offset": 0, 00:24:46.702 "data_size": 65536 00:24:46.702 }, 00:24:46.702 { 00:24:46.702 "name": "BaseBdev2", 00:24:46.702 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:46.702 "is_configured": true, 00:24:46.702 "data_offset": 0, 00:24:46.702 "data_size": 65536 00:24:46.702 }, 00:24:46.702 { 00:24:46.702 "name": "BaseBdev3", 00:24:46.702 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:46.702 "is_configured": true, 00:24:46.702 "data_offset": 0, 00:24:46.702 "data_size": 65536 00:24:46.702 }, 00:24:46.702 { 00:24:46.702 "name": "BaseBdev4", 00:24:46.702 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:46.702 "is_configured": true, 00:24:46.702 "data_offset": 0, 00:24:46.702 "data_size": 65536 00:24:46.702 } 00:24:46.702 ] 00:24:46.702 }' 00:24:46.702 11:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:46.702 11:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.267 11:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.267 11:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:47.525 11:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:47.525 11:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.525 11:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:47.782 11:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c8ce3e0e-9d8f-439d-9e9f-d3db3638261e 00:24:48.040 [2024-06-10 11:48:20.005136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:48.040 [2024-06-10 11:48:20.005395] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:24:48.040 [2024-06-10 11:48:20.005443] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:48.040 [2024-06-10 11:48:20.005661] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:48.040 [2024-06-10 11:48:20.006110] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:24:48.040 [2024-06-10 11:48:20.006241] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:24:48.040 [2024-06-10 11:48:20.006570] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.040 NewBaseBdev 00:24:48.040 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:48.040 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:24:48.040 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:48.040 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 
00:24:48.040 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:48.040 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:48.040 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:48.298 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:48.556 [ 00:24:48.556 { 00:24:48.556 "name": "NewBaseBdev", 00:24:48.556 "aliases": [ 00:24:48.556 "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e" 00:24:48.556 ], 00:24:48.556 "product_name": "Malloc disk", 00:24:48.556 "block_size": 512, 00:24:48.556 "num_blocks": 65536, 00:24:48.556 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:48.556 "assigned_rate_limits": { 00:24:48.556 "rw_ios_per_sec": 0, 00:24:48.556 "rw_mbytes_per_sec": 0, 00:24:48.556 "r_mbytes_per_sec": 0, 00:24:48.556 "w_mbytes_per_sec": 0 00:24:48.556 }, 00:24:48.556 "claimed": true, 00:24:48.556 "claim_type": "exclusive_write", 00:24:48.556 "zoned": false, 00:24:48.556 "supported_io_types": { 00:24:48.556 "read": true, 00:24:48.556 "write": true, 00:24:48.556 "unmap": true, 00:24:48.556 "write_zeroes": true, 00:24:48.556 "flush": true, 00:24:48.556 "reset": true, 00:24:48.556 "compare": false, 00:24:48.556 "compare_and_write": false, 00:24:48.556 "abort": true, 00:24:48.556 "nvme_admin": false, 00:24:48.556 "nvme_io": false 00:24:48.556 }, 00:24:48.556 "memory_domains": [ 00:24:48.556 { 00:24:48.556 "dma_device_id": "system", 00:24:48.556 "dma_device_type": 1 00:24:48.556 }, 00:24:48.556 { 00:24:48.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.556 "dma_device_type": 2 00:24:48.556 } 00:24:48.556 ], 00:24:48.556 "driver_specific": {} 00:24:48.556 } 00:24:48.556 ] 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.556 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.814 
11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.814 "name": "Existed_Raid", 00:24:48.814 "uuid": "091b2d7a-3da0-437c-b0c2-8d66a7a16463", 00:24:48.814 "strip_size_kb": 64, 00:24:48.814 "state": "online", 00:24:48.814 "raid_level": "raid0", 00:24:48.814 "superblock": false, 00:24:48.814 "num_base_bdevs": 4, 00:24:48.814 "num_base_bdevs_discovered": 4, 00:24:48.814 "num_base_bdevs_operational": 4, 00:24:48.814 "base_bdevs_list": [ 00:24:48.814 { 00:24:48.814 "name": "NewBaseBdev", 00:24:48.814 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:48.814 "is_configured": true, 00:24:48.814 "data_offset": 0, 00:24:48.814 "data_size": 65536 00:24:48.814 }, 00:24:48.814 { 00:24:48.814 "name": "BaseBdev2", 00:24:48.814 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:48.815 "is_configured": true, 00:24:48.815 "data_offset": 0, 00:24:48.815 "data_size": 65536 00:24:48.815 }, 00:24:48.815 { 00:24:48.815 "name": "BaseBdev3", 00:24:48.815 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:48.815 "is_configured": true, 00:24:48.815 "data_offset": 0, 00:24:48.815 "data_size": 65536 00:24:48.815 }, 00:24:48.815 { 00:24:48.815 "name": "BaseBdev4", 00:24:48.815 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:48.815 "is_configured": true, 00:24:48.815 "data_offset": 0, 00:24:48.815 "data_size": 65536 00:24:48.815 } 00:24:48.815 ] 00:24:48.815 }' 00:24:48.815 11:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.815 11:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.380 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:49.380 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:49.380 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:49.380 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:49.380 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:49.380 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:49.380 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:49.380 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:49.638 [2024-06-10 11:48:21.585864] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:49.638 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:49.638 "name": "Existed_Raid", 00:24:49.638 "aliases": [ 00:24:49.638 "091b2d7a-3da0-437c-b0c2-8d66a7a16463" 00:24:49.638 ], 00:24:49.638 "product_name": "Raid Volume", 00:24:49.638 "block_size": 512, 00:24:49.638 "num_blocks": 262144, 00:24:49.638 "uuid": "091b2d7a-3da0-437c-b0c2-8d66a7a16463", 00:24:49.638 "assigned_rate_limits": { 00:24:49.638 "rw_ios_per_sec": 0, 00:24:49.638 "rw_mbytes_per_sec": 0, 00:24:49.638 "r_mbytes_per_sec": 0, 00:24:49.638 "w_mbytes_per_sec": 0 00:24:49.638 }, 00:24:49.638 "claimed": false, 00:24:49.638 "zoned": false, 00:24:49.638 "supported_io_types": { 00:24:49.638 "read": true, 00:24:49.638 "write": true, 00:24:49.638 "unmap": true, 00:24:49.638 "write_zeroes": true, 00:24:49.638 "flush": true, 
00:24:49.638 "reset": true, 00:24:49.638 "compare": false, 00:24:49.638 "compare_and_write": false, 00:24:49.638 "abort": false, 00:24:49.638 "nvme_admin": false, 00:24:49.638 "nvme_io": false 00:24:49.638 }, 00:24:49.638 "memory_domains": [ 00:24:49.638 { 00:24:49.638 "dma_device_id": "system", 00:24:49.638 "dma_device_type": 1 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.638 "dma_device_type": 2 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "dma_device_id": "system", 00:24:49.638 "dma_device_type": 1 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.638 "dma_device_type": 2 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "dma_device_id": "system", 00:24:49.638 "dma_device_type": 1 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.638 "dma_device_type": 2 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "dma_device_id": "system", 00:24:49.638 "dma_device_type": 1 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.638 "dma_device_type": 2 00:24:49.638 } 00:24:49.638 ], 00:24:49.638 "driver_specific": { 00:24:49.638 "raid": { 00:24:49.638 "uuid": "091b2d7a-3da0-437c-b0c2-8d66a7a16463", 00:24:49.638 "strip_size_kb": 64, 00:24:49.638 "state": "online", 00:24:49.638 "raid_level": "raid0", 00:24:49.638 "superblock": false, 00:24:49.638 "num_base_bdevs": 4, 00:24:49.638 "num_base_bdevs_discovered": 4, 00:24:49.638 "num_base_bdevs_operational": 4, 00:24:49.638 "base_bdevs_list": [ 00:24:49.638 { 00:24:49.638 "name": "NewBaseBdev", 00:24:49.638 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:49.638 "is_configured": true, 00:24:49.638 "data_offset": 0, 00:24:49.638 "data_size": 65536 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "name": "BaseBdev2", 00:24:49.638 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:49.638 "is_configured": true, 00:24:49.638 "data_offset": 0, 00:24:49.638 "data_size": 65536 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "name": "BaseBdev3", 00:24:49.638 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:49.638 "is_configured": true, 00:24:49.638 "data_offset": 0, 00:24:49.638 "data_size": 65536 00:24:49.638 }, 00:24:49.638 { 00:24:49.638 "name": "BaseBdev4", 00:24:49.638 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:49.638 "is_configured": true, 00:24:49.638 "data_offset": 0, 00:24:49.638 "data_size": 65536 00:24:49.638 } 00:24:49.638 ] 00:24:49.638 } 00:24:49.638 } 00:24:49.638 }' 00:24:49.638 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:49.638 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:49.638 BaseBdev2 00:24:49.638 BaseBdev3 00:24:49.638 BaseBdev4' 00:24:49.638 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:49.638 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:49.638 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:49.896 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:49.896 "name": "NewBaseBdev", 00:24:49.897 "aliases": [ 00:24:49.897 "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e" 00:24:49.897 ], 00:24:49.897 "product_name": 
"Malloc disk", 00:24:49.897 "block_size": 512, 00:24:49.897 "num_blocks": 65536, 00:24:49.897 "uuid": "c8ce3e0e-9d8f-439d-9e9f-d3db3638261e", 00:24:49.897 "assigned_rate_limits": { 00:24:49.897 "rw_ios_per_sec": 0, 00:24:49.897 "rw_mbytes_per_sec": 0, 00:24:49.897 "r_mbytes_per_sec": 0, 00:24:49.897 "w_mbytes_per_sec": 0 00:24:49.897 }, 00:24:49.897 "claimed": true, 00:24:49.897 "claim_type": "exclusive_write", 00:24:49.897 "zoned": false, 00:24:49.897 "supported_io_types": { 00:24:49.897 "read": true, 00:24:49.897 "write": true, 00:24:49.897 "unmap": true, 00:24:49.897 "write_zeroes": true, 00:24:49.897 "flush": true, 00:24:49.897 "reset": true, 00:24:49.897 "compare": false, 00:24:49.897 "compare_and_write": false, 00:24:49.897 "abort": true, 00:24:49.897 "nvme_admin": false, 00:24:49.897 "nvme_io": false 00:24:49.897 }, 00:24:49.897 "memory_domains": [ 00:24:49.897 { 00:24:49.897 "dma_device_id": "system", 00:24:49.897 "dma_device_type": 1 00:24:49.897 }, 00:24:49.897 { 00:24:49.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.897 "dma_device_type": 2 00:24:49.897 } 00:24:49.897 ], 00:24:49.897 "driver_specific": {} 00:24:49.897 }' 00:24:49.897 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:50.154 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:50.154 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:50.154 11:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.154 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.154 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:50.154 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.154 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.154 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:50.154 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.154 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.413 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:50.413 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:50.413 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:50.413 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:50.671 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:50.671 "name": "BaseBdev2", 00:24:50.671 "aliases": [ 00:24:50.671 "57ca3534-4ff3-48ee-9840-01905bd9872b" 00:24:50.671 ], 00:24:50.671 "product_name": "Malloc disk", 00:24:50.671 "block_size": 512, 00:24:50.671 "num_blocks": 65536, 00:24:50.671 "uuid": "57ca3534-4ff3-48ee-9840-01905bd9872b", 00:24:50.671 "assigned_rate_limits": { 00:24:50.671 "rw_ios_per_sec": 0, 00:24:50.671 "rw_mbytes_per_sec": 0, 00:24:50.671 "r_mbytes_per_sec": 0, 00:24:50.671 "w_mbytes_per_sec": 0 00:24:50.671 }, 00:24:50.671 "claimed": true, 00:24:50.671 "claim_type": "exclusive_write", 00:24:50.671 "zoned": false, 00:24:50.671 "supported_io_types": { 00:24:50.671 "read": 
true, 00:24:50.671 "write": true, 00:24:50.671 "unmap": true, 00:24:50.671 "write_zeroes": true, 00:24:50.671 "flush": true, 00:24:50.671 "reset": true, 00:24:50.671 "compare": false, 00:24:50.671 "compare_and_write": false, 00:24:50.671 "abort": true, 00:24:50.671 "nvme_admin": false, 00:24:50.671 "nvme_io": false 00:24:50.671 }, 00:24:50.671 "memory_domains": [ 00:24:50.671 { 00:24:50.671 "dma_device_id": "system", 00:24:50.671 "dma_device_type": 1 00:24:50.671 }, 00:24:50.671 { 00:24:50.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.671 "dma_device_type": 2 00:24:50.671 } 00:24:50.671 ], 00:24:50.671 "driver_specific": {} 00:24:50.671 }' 00:24:50.671 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:50.671 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:50.671 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:50.671 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.671 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:50.929 11:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:51.187 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:51.187 "name": "BaseBdev3", 00:24:51.187 "aliases": [ 00:24:51.187 "45d5390a-28de-426b-bfe5-98c1001841ae" 00:24:51.187 ], 00:24:51.187 "product_name": "Malloc disk", 00:24:51.187 "block_size": 512, 00:24:51.187 "num_blocks": 65536, 00:24:51.187 "uuid": "45d5390a-28de-426b-bfe5-98c1001841ae", 00:24:51.187 "assigned_rate_limits": { 00:24:51.187 "rw_ios_per_sec": 0, 00:24:51.187 "rw_mbytes_per_sec": 0, 00:24:51.187 "r_mbytes_per_sec": 0, 00:24:51.187 "w_mbytes_per_sec": 0 00:24:51.187 }, 00:24:51.187 "claimed": true, 00:24:51.187 "claim_type": "exclusive_write", 00:24:51.187 "zoned": false, 00:24:51.187 "supported_io_types": { 00:24:51.187 "read": true, 00:24:51.187 "write": true, 00:24:51.187 "unmap": true, 00:24:51.187 "write_zeroes": true, 00:24:51.187 "flush": true, 00:24:51.187 "reset": true, 00:24:51.187 "compare": false, 00:24:51.187 "compare_and_write": false, 00:24:51.187 "abort": true, 00:24:51.187 "nvme_admin": false, 00:24:51.187 "nvme_io": false 00:24:51.187 }, 00:24:51.187 "memory_domains": [ 00:24:51.187 { 00:24:51.187 "dma_device_id": "system", 00:24:51.187 "dma_device_type": 1 00:24:51.187 }, 00:24:51.187 { 00:24:51.187 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.187 "dma_device_type": 2 00:24:51.187 } 00:24:51.187 ], 00:24:51.187 "driver_specific": {} 00:24:51.187 }' 00:24:51.187 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:51.445 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:51.445 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:51.445 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:51.445 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:51.445 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:51.445 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:51.445 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:51.703 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:51.703 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:51.703 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:51.703 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:51.703 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:51.703 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:51.703 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:51.961 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:51.961 "name": "BaseBdev4", 00:24:51.961 "aliases": [ 00:24:51.961 "96336603-d0b3-4266-bac1-80c3a9f1d89e" 00:24:51.961 ], 00:24:51.961 "product_name": "Malloc disk", 00:24:51.961 "block_size": 512, 00:24:51.961 "num_blocks": 65536, 00:24:51.961 "uuid": "96336603-d0b3-4266-bac1-80c3a9f1d89e", 00:24:51.961 "assigned_rate_limits": { 00:24:51.961 "rw_ios_per_sec": 0, 00:24:51.961 "rw_mbytes_per_sec": 0, 00:24:51.961 "r_mbytes_per_sec": 0, 00:24:51.961 "w_mbytes_per_sec": 0 00:24:51.961 }, 00:24:51.961 "claimed": true, 00:24:51.961 "claim_type": "exclusive_write", 00:24:51.961 "zoned": false, 00:24:51.961 "supported_io_types": { 00:24:51.961 "read": true, 00:24:51.961 "write": true, 00:24:51.961 "unmap": true, 00:24:51.961 "write_zeroes": true, 00:24:51.961 "flush": true, 00:24:51.961 "reset": true, 00:24:51.961 "compare": false, 00:24:51.961 "compare_and_write": false, 00:24:51.961 "abort": true, 00:24:51.961 "nvme_admin": false, 00:24:51.961 "nvme_io": false 00:24:51.961 }, 00:24:51.961 "memory_domains": [ 00:24:51.961 { 00:24:51.961 "dma_device_id": "system", 00:24:51.961 "dma_device_type": 1 00:24:51.961 }, 00:24:51.961 { 00:24:51.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.961 "dma_device_type": 2 00:24:51.961 } 00:24:51.961 ], 00:24:51.961 "driver_specific": {} 00:24:51.961 }' 00:24:51.961 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:51.961 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:51.961 11:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:51.961 11:48:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:51.961 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:52.218 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:52.218 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:52.219 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:52.219 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:52.219 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:52.219 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:52.219 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:52.219 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:52.476 [2024-06-10 11:48:24.446162] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:52.476 [2024-06-10 11:48:24.446397] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:52.476 [2024-06-10 11:48:24.446551] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.476 [2024-06-10 11:48:24.446718] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:52.476 [2024-06-10 11:48:24.446837] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 135706 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 135706 ']' 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 135706 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 135706 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 135706' 00:24:52.476 killing process with pid 135706 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 135706 00:24:52.476 [2024-06-10 11:48:24.488713] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:52.476 11:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 135706 00:24:53.042 [2024-06-10 11:48:24.937759] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:54.943 ************************************ 00:24:54.943 END TEST raid_state_function_test 00:24:54.943 ************************************ 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:54.943 00:24:54.943 
real 0m35.760s 00:24:54.943 user 1m5.105s 00:24:54.943 sys 0m4.526s 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.943 11:48:26 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:24:54.943 11:48:26 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:24:54.943 11:48:26 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:54.943 11:48:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:54.943 ************************************ 00:24:54.943 START TEST raid_state_function_test_sb 00:24:54.943 ************************************ 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 4 true 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:54.943 11:48:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=136833 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 136833' 00:24:54.943 Process raid pid: 136833 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 136833 /var/tmp/spdk-raid.sock 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 136833 ']' 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:54.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:54.943 11:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.943 [2024-06-10 11:48:26.654183] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
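The raid_state_function_test_sb pass that starts here repeats the raid0 state-machine exercise with superblock support enabled: because superblock=true, the driver sets superblock_create_arg=-s, every bdev_raid_create call carries -s, and the base bdevs later report "data_offset": 2048 and "data_size": 63488 instead of 0 and 65536, the first 2048 blocks of each base bdev evidently being set aside for the on-disk superblock. A condensed sketch of the RPC sequence the script drives against the dedicated /var/tmp/spdk-raid.sock instance, assuming the same 32 MiB / 512-byte-block malloc geometry used in this run (the test itself registers the raid first and adds base bdevs one at a time, checking the "configuring" state along the way):

  # create the four malloc base bdevs (32 MiB each, 512-byte blocks -> 65536 blocks)
  for i in 1 2 3 4; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b "BaseBdev$i"
  done

  # assemble them into a raid0 volume with a 64 KiB strip and an on-disk superblock (-s)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # confirm the array reached the expected state, much as verify_raid_bdev_state does with jq
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
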
00:24:54.943 [2024-06-10 11:48:26.654552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.943 [2024-06-10 11:48:26.822808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.201 [2024-06-10 11:48:27.049326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.458 [2024-06-10 11:48:27.290938] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:55.716 11:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:55.716 11:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:24:55.716 11:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:55.975 [2024-06-10 11:48:27.984569] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:55.975 [2024-06-10 11:48:27.984875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:55.975 [2024-06-10 11:48:27.985005] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:55.975 [2024-06-10 11:48:27.985069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:55.975 [2024-06-10 11:48:27.985149] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:55.975 [2024-06-10 11:48:27.985201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:55.975 [2024-06-10 11:48:27.985232] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:55.975 [2024-06-10 11:48:27.985381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.975 11:48:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.233 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:56.233 "name": "Existed_Raid", 00:24:56.233 "uuid": "351d78b3-33b0-4ccc-9619-e5542a104e9b", 00:24:56.233 "strip_size_kb": 64, 00:24:56.233 "state": "configuring", 00:24:56.233 "raid_level": "raid0", 00:24:56.233 "superblock": true, 00:24:56.233 "num_base_bdevs": 4, 00:24:56.233 "num_base_bdevs_discovered": 0, 00:24:56.233 "num_base_bdevs_operational": 4, 00:24:56.233 "base_bdevs_list": [ 00:24:56.233 { 00:24:56.233 "name": "BaseBdev1", 00:24:56.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.233 "is_configured": false, 00:24:56.233 "data_offset": 0, 00:24:56.233 "data_size": 0 00:24:56.233 }, 00:24:56.233 { 00:24:56.233 "name": "BaseBdev2", 00:24:56.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.233 "is_configured": false, 00:24:56.233 "data_offset": 0, 00:24:56.233 "data_size": 0 00:24:56.233 }, 00:24:56.233 { 00:24:56.233 "name": "BaseBdev3", 00:24:56.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.233 "is_configured": false, 00:24:56.233 "data_offset": 0, 00:24:56.233 "data_size": 0 00:24:56.233 }, 00:24:56.233 { 00:24:56.233 "name": "BaseBdev4", 00:24:56.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.233 "is_configured": false, 00:24:56.233 "data_offset": 0, 00:24:56.233 "data_size": 0 00:24:56.233 } 00:24:56.233 ] 00:24:56.233 }' 00:24:56.233 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:56.233 11:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.802 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:57.061 [2024-06-10 11:48:28.968653] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:57.061 [2024-06-10 11:48:28.968897] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:57.061 11:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:57.319 [2024-06-10 11:48:29.180701] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:57.319 [2024-06-10 11:48:29.180975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:57.319 [2024-06-10 11:48:29.181068] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:57.319 [2024-06-10 11:48:29.181214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:57.319 [2024-06-10 11:48:29.181294] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:57.319 [2024-06-10 11:48:29.181365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:57.319 [2024-06-10 11:48:29.181562] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:57.319 [2024-06-10 11:48:29.181619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:57.319 11:48:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:57.577 [2024-06-10 11:48:29.481945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.577 BaseBdev1 00:24:57.577 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:57.577 11:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:24:57.577 11:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:57.577 11:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:24:57.577 11:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:57.577 11:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:57.577 11:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:57.836 11:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:58.094 [ 00:24:58.094 { 00:24:58.094 "name": "BaseBdev1", 00:24:58.094 "aliases": [ 00:24:58.094 "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17" 00:24:58.094 ], 00:24:58.094 "product_name": "Malloc disk", 00:24:58.094 "block_size": 512, 00:24:58.094 "num_blocks": 65536, 00:24:58.094 "uuid": "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17", 00:24:58.094 "assigned_rate_limits": { 00:24:58.094 "rw_ios_per_sec": 0, 00:24:58.094 "rw_mbytes_per_sec": 0, 00:24:58.094 "r_mbytes_per_sec": 0, 00:24:58.094 "w_mbytes_per_sec": 0 00:24:58.094 }, 00:24:58.094 "claimed": true, 00:24:58.094 "claim_type": "exclusive_write", 00:24:58.094 "zoned": false, 00:24:58.094 "supported_io_types": { 00:24:58.094 "read": true, 00:24:58.094 "write": true, 00:24:58.094 "unmap": true, 00:24:58.094 "write_zeroes": true, 00:24:58.094 "flush": true, 00:24:58.094 "reset": true, 00:24:58.094 "compare": false, 00:24:58.094 "compare_and_write": false, 00:24:58.094 "abort": true, 00:24:58.094 "nvme_admin": false, 00:24:58.094 "nvme_io": false 00:24:58.094 }, 00:24:58.094 "memory_domains": [ 00:24:58.094 { 00:24:58.094 "dma_device_id": "system", 00:24:58.094 "dma_device_type": 1 00:24:58.094 }, 00:24:58.094 { 00:24:58.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.094 "dma_device_type": 2 00:24:58.094 } 00:24:58.094 ], 00:24:58.094 "driver_specific": {} 00:24:58.094 } 00:24:58.094 ] 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.094 11:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.352 11:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.352 "name": "Existed_Raid", 00:24:58.352 "uuid": "4c0a9307-8df2-4f8e-9bb0-8c7e5df341c6", 00:24:58.352 "strip_size_kb": 64, 00:24:58.352 "state": "configuring", 00:24:58.352 "raid_level": "raid0", 00:24:58.352 "superblock": true, 00:24:58.352 "num_base_bdevs": 4, 00:24:58.352 "num_base_bdevs_discovered": 1, 00:24:58.352 "num_base_bdevs_operational": 4, 00:24:58.352 "base_bdevs_list": [ 00:24:58.352 { 00:24:58.352 "name": "BaseBdev1", 00:24:58.352 "uuid": "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17", 00:24:58.352 "is_configured": true, 00:24:58.352 "data_offset": 2048, 00:24:58.352 "data_size": 63488 00:24:58.352 }, 00:24:58.352 { 00:24:58.352 "name": "BaseBdev2", 00:24:58.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.352 "is_configured": false, 00:24:58.352 "data_offset": 0, 00:24:58.352 "data_size": 0 00:24:58.352 }, 00:24:58.352 { 00:24:58.352 "name": "BaseBdev3", 00:24:58.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.352 "is_configured": false, 00:24:58.352 "data_offset": 0, 00:24:58.352 "data_size": 0 00:24:58.352 }, 00:24:58.352 { 00:24:58.352 "name": "BaseBdev4", 00:24:58.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.352 "is_configured": false, 00:24:58.352 "data_offset": 0, 00:24:58.352 "data_size": 0 00:24:58.352 } 00:24:58.352 ] 00:24:58.352 }' 00:24:58.352 11:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.352 11:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:58.917 11:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:59.173 [2024-06-10 11:48:31.078382] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:59.173 [2024-06-10 11:48:31.078643] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:59.173 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:59.430 [2024-06-10 11:48:31.354507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:59.430 [2024-06-10 11:48:31.357447] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:59.430 [2024-06-10 11:48:31.357676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:59.430 
[2024-06-10 11:48:31.357796] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:59.430 [2024-06-10 11:48:31.357876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:59.430 [2024-06-10 11:48:31.357994] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:59.430 [2024-06-10 11:48:31.358070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.430 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.688 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.688 "name": "Existed_Raid", 00:24:59.688 "uuid": "234a06c6-7c68-4e6e-8ecf-fc8ca033cb47", 00:24:59.688 "strip_size_kb": 64, 00:24:59.688 "state": "configuring", 00:24:59.688 "raid_level": "raid0", 00:24:59.688 "superblock": true, 00:24:59.688 "num_base_bdevs": 4, 00:24:59.688 "num_base_bdevs_discovered": 1, 00:24:59.688 "num_base_bdevs_operational": 4, 00:24:59.688 "base_bdevs_list": [ 00:24:59.688 { 00:24:59.688 "name": "BaseBdev1", 00:24:59.688 "uuid": "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17", 00:24:59.688 "is_configured": true, 00:24:59.688 "data_offset": 2048, 00:24:59.689 "data_size": 63488 00:24:59.689 }, 00:24:59.689 { 00:24:59.689 "name": "BaseBdev2", 00:24:59.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.689 "is_configured": false, 00:24:59.689 "data_offset": 0, 00:24:59.689 "data_size": 0 00:24:59.689 }, 00:24:59.689 { 00:24:59.689 "name": "BaseBdev3", 00:24:59.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.689 "is_configured": false, 00:24:59.689 "data_offset": 0, 00:24:59.689 "data_size": 0 00:24:59.689 }, 00:24:59.689 { 00:24:59.689 "name": "BaseBdev4", 00:24:59.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.689 "is_configured": 
false, 00:24:59.689 "data_offset": 0, 00:24:59.689 "data_size": 0 00:24:59.689 } 00:24:59.689 ] 00:24:59.689 }' 00:24:59.689 11:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.689 11:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:00.255 11:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:00.512 [2024-06-10 11:48:32.523983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:00.512 BaseBdev2 00:25:00.512 11:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:00.512 11:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:25:00.512 11:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:25:00.512 11:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:25:00.512 11:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:25:00.512 11:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:25:00.512 11:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:00.770 11:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:01.042 [ 00:25:01.042 { 00:25:01.042 "name": "BaseBdev2", 00:25:01.042 "aliases": [ 00:25:01.042 "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0" 00:25:01.042 ], 00:25:01.042 "product_name": "Malloc disk", 00:25:01.042 "block_size": 512, 00:25:01.042 "num_blocks": 65536, 00:25:01.042 "uuid": "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0", 00:25:01.042 "assigned_rate_limits": { 00:25:01.042 "rw_ios_per_sec": 0, 00:25:01.042 "rw_mbytes_per_sec": 0, 00:25:01.042 "r_mbytes_per_sec": 0, 00:25:01.042 "w_mbytes_per_sec": 0 00:25:01.042 }, 00:25:01.042 "claimed": true, 00:25:01.042 "claim_type": "exclusive_write", 00:25:01.042 "zoned": false, 00:25:01.042 "supported_io_types": { 00:25:01.042 "read": true, 00:25:01.042 "write": true, 00:25:01.042 "unmap": true, 00:25:01.042 "write_zeroes": true, 00:25:01.042 "flush": true, 00:25:01.042 "reset": true, 00:25:01.042 "compare": false, 00:25:01.042 "compare_and_write": false, 00:25:01.042 "abort": true, 00:25:01.042 "nvme_admin": false, 00:25:01.042 "nvme_io": false 00:25:01.042 }, 00:25:01.042 "memory_domains": [ 00:25:01.042 { 00:25:01.042 "dma_device_id": "system", 00:25:01.042 "dma_device_type": 1 00:25:01.042 }, 00:25:01.042 { 00:25:01.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.042 "dma_device_type": 2 00:25:01.042 } 00:25:01.042 ], 00:25:01.042 "driver_specific": {} 00:25:01.042 } 00:25:01.042 ] 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.042 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.329 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.329 "name": "Existed_Raid", 00:25:01.329 "uuid": "234a06c6-7c68-4e6e-8ecf-fc8ca033cb47", 00:25:01.329 "strip_size_kb": 64, 00:25:01.329 "state": "configuring", 00:25:01.329 "raid_level": "raid0", 00:25:01.329 "superblock": true, 00:25:01.329 "num_base_bdevs": 4, 00:25:01.329 "num_base_bdevs_discovered": 2, 00:25:01.329 "num_base_bdevs_operational": 4, 00:25:01.329 "base_bdevs_list": [ 00:25:01.329 { 00:25:01.329 "name": "BaseBdev1", 00:25:01.329 "uuid": "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17", 00:25:01.329 "is_configured": true, 00:25:01.329 "data_offset": 2048, 00:25:01.329 "data_size": 63488 00:25:01.329 }, 00:25:01.329 { 00:25:01.329 "name": "BaseBdev2", 00:25:01.329 "uuid": "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0", 00:25:01.329 "is_configured": true, 00:25:01.329 "data_offset": 2048, 00:25:01.329 "data_size": 63488 00:25:01.329 }, 00:25:01.329 { 00:25:01.329 "name": "BaseBdev3", 00:25:01.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.329 "is_configured": false, 00:25:01.329 "data_offset": 0, 00:25:01.329 "data_size": 0 00:25:01.329 }, 00:25:01.329 { 00:25:01.329 "name": "BaseBdev4", 00:25:01.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.329 "is_configured": false, 00:25:01.329 "data_offset": 0, 00:25:01.329 "data_size": 0 00:25:01.329 } 00:25:01.329 ] 00:25:01.329 }' 00:25:01.329 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.329 11:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.261 11:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:02.261 [2024-06-10 11:48:34.261159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:02.261 BaseBdev3 00:25:02.261 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:02.261 11:48:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:25:02.261 11:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:25:02.261 11:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:25:02.261 11:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:25:02.262 11:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:25:02.262 11:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:02.519 11:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:02.776 [ 00:25:02.776 { 00:25:02.776 "name": "BaseBdev3", 00:25:02.776 "aliases": [ 00:25:02.776 "3f923087-e1d6-4ed2-8206-54f2b063d11f" 00:25:02.776 ], 00:25:02.776 "product_name": "Malloc disk", 00:25:02.776 "block_size": 512, 00:25:02.776 "num_blocks": 65536, 00:25:02.776 "uuid": "3f923087-e1d6-4ed2-8206-54f2b063d11f", 00:25:02.776 "assigned_rate_limits": { 00:25:02.776 "rw_ios_per_sec": 0, 00:25:02.776 "rw_mbytes_per_sec": 0, 00:25:02.776 "r_mbytes_per_sec": 0, 00:25:02.776 "w_mbytes_per_sec": 0 00:25:02.776 }, 00:25:02.776 "claimed": true, 00:25:02.776 "claim_type": "exclusive_write", 00:25:02.776 "zoned": false, 00:25:02.776 "supported_io_types": { 00:25:02.776 "read": true, 00:25:02.776 "write": true, 00:25:02.776 "unmap": true, 00:25:02.776 "write_zeroes": true, 00:25:02.776 "flush": true, 00:25:02.776 "reset": true, 00:25:02.776 "compare": false, 00:25:02.776 "compare_and_write": false, 00:25:02.776 "abort": true, 00:25:02.776 "nvme_admin": false, 00:25:02.776 "nvme_io": false 00:25:02.776 }, 00:25:02.776 "memory_domains": [ 00:25:02.776 { 00:25:02.776 "dma_device_id": "system", 00:25:02.776 "dma_device_type": 1 00:25:02.776 }, 00:25:02.776 { 00:25:02.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.776 "dma_device_type": 2 00:25:02.776 } 00:25:02.776 ], 00:25:02.776 "driver_specific": {} 00:25:02.776 } 00:25:02.776 ] 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.776 11:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.035 11:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.035 "name": "Existed_Raid", 00:25:03.035 "uuid": "234a06c6-7c68-4e6e-8ecf-fc8ca033cb47", 00:25:03.035 "strip_size_kb": 64, 00:25:03.035 "state": "configuring", 00:25:03.035 "raid_level": "raid0", 00:25:03.035 "superblock": true, 00:25:03.035 "num_base_bdevs": 4, 00:25:03.035 "num_base_bdevs_discovered": 3, 00:25:03.035 "num_base_bdevs_operational": 4, 00:25:03.035 "base_bdevs_list": [ 00:25:03.035 { 00:25:03.035 "name": "BaseBdev1", 00:25:03.035 "uuid": "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17", 00:25:03.035 "is_configured": true, 00:25:03.035 "data_offset": 2048, 00:25:03.035 "data_size": 63488 00:25:03.035 }, 00:25:03.035 { 00:25:03.035 "name": "BaseBdev2", 00:25:03.035 "uuid": "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0", 00:25:03.035 "is_configured": true, 00:25:03.035 "data_offset": 2048, 00:25:03.035 "data_size": 63488 00:25:03.035 }, 00:25:03.035 { 00:25:03.035 "name": "BaseBdev3", 00:25:03.035 "uuid": "3f923087-e1d6-4ed2-8206-54f2b063d11f", 00:25:03.035 "is_configured": true, 00:25:03.035 "data_offset": 2048, 00:25:03.035 "data_size": 63488 00:25:03.035 }, 00:25:03.035 { 00:25:03.035 "name": "BaseBdev4", 00:25:03.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.035 "is_configured": false, 00:25:03.035 "data_offset": 0, 00:25:03.035 "data_size": 0 00:25:03.035 } 00:25:03.035 ] 00:25:03.035 }' 00:25:03.035 11:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.035 11:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.968 11:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:03.968 [2024-06-10 11:48:35.927242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:03.968 [2024-06-10 11:48:35.927742] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:25:03.968 [2024-06-10 11:48:35.927889] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:03.968 [2024-06-10 11:48:35.928054] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:03.968 [2024-06-10 11:48:35.928434] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:25:03.968 [2024-06-10 11:48:35.928551] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:25:03.968 [2024-06-10 11:48:35.928783] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.968 BaseBdev4 00:25:03.968 11:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:03.968 11:48:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:25:03.968 11:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:25:03.968 11:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:25:03.968 11:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:25:03.968 11:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:25:03.968 11:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:04.226 11:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:04.792 [ 00:25:04.792 { 00:25:04.792 "name": "BaseBdev4", 00:25:04.792 "aliases": [ 00:25:04.792 "2686876e-a2fa-4710-8269-bc158c55d3f3" 00:25:04.792 ], 00:25:04.792 "product_name": "Malloc disk", 00:25:04.792 "block_size": 512, 00:25:04.792 "num_blocks": 65536, 00:25:04.792 "uuid": "2686876e-a2fa-4710-8269-bc158c55d3f3", 00:25:04.792 "assigned_rate_limits": { 00:25:04.792 "rw_ios_per_sec": 0, 00:25:04.792 "rw_mbytes_per_sec": 0, 00:25:04.792 "r_mbytes_per_sec": 0, 00:25:04.792 "w_mbytes_per_sec": 0 00:25:04.792 }, 00:25:04.792 "claimed": true, 00:25:04.792 "claim_type": "exclusive_write", 00:25:04.792 "zoned": false, 00:25:04.792 "supported_io_types": { 00:25:04.792 "read": true, 00:25:04.792 "write": true, 00:25:04.792 "unmap": true, 00:25:04.792 "write_zeroes": true, 00:25:04.792 "flush": true, 00:25:04.792 "reset": true, 00:25:04.792 "compare": false, 00:25:04.792 "compare_and_write": false, 00:25:04.792 "abort": true, 00:25:04.792 "nvme_admin": false, 00:25:04.792 "nvme_io": false 00:25:04.792 }, 00:25:04.792 "memory_domains": [ 00:25:04.792 { 00:25:04.792 "dma_device_id": "system", 00:25:04.792 "dma_device_type": 1 00:25:04.792 }, 00:25:04.792 { 00:25:04.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.792 "dma_device_type": 2 00:25:04.792 } 00:25:04.792 ], 00:25:04.792 "driver_specific": {} 00:25:04.792 } 00:25:04.792 ] 00:25:04.792 11:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:25:04.792 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:04.792 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:04.792 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:25:04.792 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:04.792 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:04.792 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:04.793 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:04.793 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:04.793 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.793 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:25:04.793 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.793 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.793 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.793 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.051 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.051 "name": "Existed_Raid", 00:25:05.051 "uuid": "234a06c6-7c68-4e6e-8ecf-fc8ca033cb47", 00:25:05.051 "strip_size_kb": 64, 00:25:05.051 "state": "online", 00:25:05.051 "raid_level": "raid0", 00:25:05.051 "superblock": true, 00:25:05.051 "num_base_bdevs": 4, 00:25:05.051 "num_base_bdevs_discovered": 4, 00:25:05.051 "num_base_bdevs_operational": 4, 00:25:05.051 "base_bdevs_list": [ 00:25:05.051 { 00:25:05.051 "name": "BaseBdev1", 00:25:05.051 "uuid": "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17", 00:25:05.051 "is_configured": true, 00:25:05.051 "data_offset": 2048, 00:25:05.051 "data_size": 63488 00:25:05.051 }, 00:25:05.051 { 00:25:05.051 "name": "BaseBdev2", 00:25:05.051 "uuid": "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0", 00:25:05.051 "is_configured": true, 00:25:05.051 "data_offset": 2048, 00:25:05.051 "data_size": 63488 00:25:05.051 }, 00:25:05.051 { 00:25:05.051 "name": "BaseBdev3", 00:25:05.051 "uuid": "3f923087-e1d6-4ed2-8206-54f2b063d11f", 00:25:05.051 "is_configured": true, 00:25:05.051 "data_offset": 2048, 00:25:05.051 "data_size": 63488 00:25:05.051 }, 00:25:05.051 { 00:25:05.051 "name": "BaseBdev4", 00:25:05.051 "uuid": "2686876e-a2fa-4710-8269-bc158c55d3f3", 00:25:05.051 "is_configured": true, 00:25:05.051 "data_offset": 2048, 00:25:05.051 "data_size": 63488 00:25:05.051 } 00:25:05.051 ] 00:25:05.051 }' 00:25:05.051 11:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.051 11:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.616 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:05.616 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:05.616 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:05.616 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:05.616 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:05.616 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:05.616 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:05.616 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:05.873 [2024-06-10 11:48:37.883370] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:05.873 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:05.873 "name": "Existed_Raid", 00:25:05.873 "aliases": [ 00:25:05.873 "234a06c6-7c68-4e6e-8ecf-fc8ca033cb47" 00:25:05.873 ], 00:25:05.873 
"product_name": "Raid Volume", 00:25:05.873 "block_size": 512, 00:25:05.873 "num_blocks": 253952, 00:25:05.873 "uuid": "234a06c6-7c68-4e6e-8ecf-fc8ca033cb47", 00:25:05.873 "assigned_rate_limits": { 00:25:05.873 "rw_ios_per_sec": 0, 00:25:05.873 "rw_mbytes_per_sec": 0, 00:25:05.873 "r_mbytes_per_sec": 0, 00:25:05.873 "w_mbytes_per_sec": 0 00:25:05.873 }, 00:25:05.873 "claimed": false, 00:25:05.873 "zoned": false, 00:25:05.873 "supported_io_types": { 00:25:05.873 "read": true, 00:25:05.873 "write": true, 00:25:05.873 "unmap": true, 00:25:05.873 "write_zeroes": true, 00:25:05.873 "flush": true, 00:25:05.873 "reset": true, 00:25:05.873 "compare": false, 00:25:05.873 "compare_and_write": false, 00:25:05.873 "abort": false, 00:25:05.873 "nvme_admin": false, 00:25:05.873 "nvme_io": false 00:25:05.873 }, 00:25:05.873 "memory_domains": [ 00:25:05.873 { 00:25:05.873 "dma_device_id": "system", 00:25:05.873 "dma_device_type": 1 00:25:05.873 }, 00:25:05.873 { 00:25:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.873 "dma_device_type": 2 00:25:05.873 }, 00:25:05.873 { 00:25:05.873 "dma_device_id": "system", 00:25:05.873 "dma_device_type": 1 00:25:05.873 }, 00:25:05.873 { 00:25:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.873 "dma_device_type": 2 00:25:05.873 }, 00:25:05.873 { 00:25:05.873 "dma_device_id": "system", 00:25:05.873 "dma_device_type": 1 00:25:05.873 }, 00:25:05.873 { 00:25:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.873 "dma_device_type": 2 00:25:05.873 }, 00:25:05.873 { 00:25:05.873 "dma_device_id": "system", 00:25:05.874 "dma_device_type": 1 00:25:05.874 }, 00:25:05.874 { 00:25:05.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.874 "dma_device_type": 2 00:25:05.874 } 00:25:05.874 ], 00:25:05.874 "driver_specific": { 00:25:05.874 "raid": { 00:25:05.874 "uuid": "234a06c6-7c68-4e6e-8ecf-fc8ca033cb47", 00:25:05.874 "strip_size_kb": 64, 00:25:05.874 "state": "online", 00:25:05.874 "raid_level": "raid0", 00:25:05.874 "superblock": true, 00:25:05.874 "num_base_bdevs": 4, 00:25:05.874 "num_base_bdevs_discovered": 4, 00:25:05.874 "num_base_bdevs_operational": 4, 00:25:05.874 "base_bdevs_list": [ 00:25:05.874 { 00:25:05.874 "name": "BaseBdev1", 00:25:05.874 "uuid": "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17", 00:25:05.874 "is_configured": true, 00:25:05.874 "data_offset": 2048, 00:25:05.874 "data_size": 63488 00:25:05.874 }, 00:25:05.874 { 00:25:05.874 "name": "BaseBdev2", 00:25:05.874 "uuid": "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0", 00:25:05.874 "is_configured": true, 00:25:05.874 "data_offset": 2048, 00:25:05.874 "data_size": 63488 00:25:05.874 }, 00:25:05.874 { 00:25:05.874 "name": "BaseBdev3", 00:25:05.874 "uuid": "3f923087-e1d6-4ed2-8206-54f2b063d11f", 00:25:05.874 "is_configured": true, 00:25:05.874 "data_offset": 2048, 00:25:05.874 "data_size": 63488 00:25:05.874 }, 00:25:05.874 { 00:25:05.874 "name": "BaseBdev4", 00:25:05.874 "uuid": "2686876e-a2fa-4710-8269-bc158c55d3f3", 00:25:05.874 "is_configured": true, 00:25:05.874 "data_offset": 2048, 00:25:05.874 "data_size": 63488 00:25:05.874 } 00:25:05.874 ] 00:25:05.874 } 00:25:05.874 } 00:25:05.874 }' 00:25:05.874 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:06.131 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:06.131 BaseBdev2 00:25:06.131 BaseBdev3 00:25:06.131 BaseBdev4' 00:25:06.131 11:48:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:06.131 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:06.131 11:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:06.131 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:06.131 "name": "BaseBdev1", 00:25:06.131 "aliases": [ 00:25:06.131 "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17" 00:25:06.131 ], 00:25:06.131 "product_name": "Malloc disk", 00:25:06.131 "block_size": 512, 00:25:06.131 "num_blocks": 65536, 00:25:06.131 "uuid": "80c4cf2a-b1e5-43fc-ad6d-a455ee4c1c17", 00:25:06.131 "assigned_rate_limits": { 00:25:06.131 "rw_ios_per_sec": 0, 00:25:06.131 "rw_mbytes_per_sec": 0, 00:25:06.131 "r_mbytes_per_sec": 0, 00:25:06.131 "w_mbytes_per_sec": 0 00:25:06.131 }, 00:25:06.131 "claimed": true, 00:25:06.131 "claim_type": "exclusive_write", 00:25:06.131 "zoned": false, 00:25:06.131 "supported_io_types": { 00:25:06.131 "read": true, 00:25:06.131 "write": true, 00:25:06.131 "unmap": true, 00:25:06.131 "write_zeroes": true, 00:25:06.131 "flush": true, 00:25:06.131 "reset": true, 00:25:06.131 "compare": false, 00:25:06.131 "compare_and_write": false, 00:25:06.131 "abort": true, 00:25:06.131 "nvme_admin": false, 00:25:06.131 "nvme_io": false 00:25:06.131 }, 00:25:06.131 "memory_domains": [ 00:25:06.131 { 00:25:06.131 "dma_device_id": "system", 00:25:06.131 "dma_device_type": 1 00:25:06.131 }, 00:25:06.131 { 00:25:06.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.131 "dma_device_type": 2 00:25:06.131 } 00:25:06.131 ], 00:25:06.131 "driver_specific": {} 00:25:06.131 }' 00:25:06.131 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:06.449 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:06.723 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:06.723 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:06.723 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:06.723 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:06.723 11:48:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:06.723 "name": "BaseBdev2", 00:25:06.723 "aliases": [ 00:25:06.723 "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0" 00:25:06.723 ], 00:25:06.723 "product_name": "Malloc disk", 00:25:06.723 "block_size": 512, 00:25:06.723 "num_blocks": 65536, 00:25:06.723 "uuid": "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0", 00:25:06.723 "assigned_rate_limits": { 00:25:06.723 "rw_ios_per_sec": 0, 00:25:06.723 "rw_mbytes_per_sec": 0, 00:25:06.723 "r_mbytes_per_sec": 0, 00:25:06.723 "w_mbytes_per_sec": 0 00:25:06.723 }, 00:25:06.723 "claimed": true, 00:25:06.723 "claim_type": "exclusive_write", 00:25:06.723 "zoned": false, 00:25:06.723 "supported_io_types": { 00:25:06.723 "read": true, 00:25:06.723 "write": true, 00:25:06.723 "unmap": true, 00:25:06.723 "write_zeroes": true, 00:25:06.723 "flush": true, 00:25:06.723 "reset": true, 00:25:06.723 "compare": false, 00:25:06.723 "compare_and_write": false, 00:25:06.723 "abort": true, 00:25:06.723 "nvme_admin": false, 00:25:06.723 "nvme_io": false 00:25:06.723 }, 00:25:06.723 "memory_domains": [ 00:25:06.723 { 00:25:06.723 "dma_device_id": "system", 00:25:06.723 "dma_device_type": 1 00:25:06.723 }, 00:25:06.723 { 00:25:06.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.723 "dma_device_type": 2 00:25:06.723 } 00:25:06.723 ], 00:25:06.723 "driver_specific": {} 00:25:06.723 }' 00:25:06.723 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:06.723 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:06.981 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:06.981 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:06.981 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:06.981 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:06.981 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:06.981 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:06.981 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:06.981 11:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:06.981 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:07.239 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:07.239 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:07.239 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:07.239 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:07.239 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:07.239 "name": "BaseBdev3", 00:25:07.239 "aliases": [ 00:25:07.239 "3f923087-e1d6-4ed2-8206-54f2b063d11f" 00:25:07.239 ], 00:25:07.239 "product_name": "Malloc disk", 00:25:07.239 "block_size": 512, 00:25:07.239 "num_blocks": 65536, 00:25:07.239 "uuid": "3f923087-e1d6-4ed2-8206-54f2b063d11f", 00:25:07.239 "assigned_rate_limits": { 00:25:07.239 "rw_ios_per_sec": 0, 00:25:07.239 "rw_mbytes_per_sec": 0, 
00:25:07.239 "r_mbytes_per_sec": 0, 00:25:07.239 "w_mbytes_per_sec": 0 00:25:07.239 }, 00:25:07.239 "claimed": true, 00:25:07.239 "claim_type": "exclusive_write", 00:25:07.239 "zoned": false, 00:25:07.239 "supported_io_types": { 00:25:07.239 "read": true, 00:25:07.239 "write": true, 00:25:07.239 "unmap": true, 00:25:07.239 "write_zeroes": true, 00:25:07.239 "flush": true, 00:25:07.239 "reset": true, 00:25:07.239 "compare": false, 00:25:07.239 "compare_and_write": false, 00:25:07.239 "abort": true, 00:25:07.239 "nvme_admin": false, 00:25:07.239 "nvme_io": false 00:25:07.239 }, 00:25:07.239 "memory_domains": [ 00:25:07.239 { 00:25:07.239 "dma_device_id": "system", 00:25:07.239 "dma_device_type": 1 00:25:07.239 }, 00:25:07.239 { 00:25:07.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.239 "dma_device_type": 2 00:25:07.239 } 00:25:07.239 ], 00:25:07.239 "driver_specific": {} 00:25:07.239 }' 00:25:07.239 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:07.498 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:07.498 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:07.498 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.498 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.498 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:07.498 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:07.498 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:07.755 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:07.755 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:07.755 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:07.756 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:07.756 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:07.756 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:07.756 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:08.014 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:08.014 "name": "BaseBdev4", 00:25:08.014 "aliases": [ 00:25:08.014 "2686876e-a2fa-4710-8269-bc158c55d3f3" 00:25:08.014 ], 00:25:08.014 "product_name": "Malloc disk", 00:25:08.014 "block_size": 512, 00:25:08.014 "num_blocks": 65536, 00:25:08.014 "uuid": "2686876e-a2fa-4710-8269-bc158c55d3f3", 00:25:08.014 "assigned_rate_limits": { 00:25:08.014 "rw_ios_per_sec": 0, 00:25:08.014 "rw_mbytes_per_sec": 0, 00:25:08.014 "r_mbytes_per_sec": 0, 00:25:08.014 "w_mbytes_per_sec": 0 00:25:08.014 }, 00:25:08.014 "claimed": true, 00:25:08.014 "claim_type": "exclusive_write", 00:25:08.014 "zoned": false, 00:25:08.014 "supported_io_types": { 00:25:08.014 "read": true, 00:25:08.014 "write": true, 00:25:08.014 "unmap": true, 00:25:08.014 "write_zeroes": true, 00:25:08.014 "flush": true, 00:25:08.014 "reset": true, 00:25:08.014 "compare": false, 00:25:08.014 
"compare_and_write": false, 00:25:08.014 "abort": true, 00:25:08.014 "nvme_admin": false, 00:25:08.014 "nvme_io": false 00:25:08.014 }, 00:25:08.014 "memory_domains": [ 00:25:08.014 { 00:25:08.014 "dma_device_id": "system", 00:25:08.014 "dma_device_type": 1 00:25:08.014 }, 00:25:08.014 { 00:25:08.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.014 "dma_device_type": 2 00:25:08.014 } 00:25:08.014 ], 00:25:08.014 "driver_specific": {} 00:25:08.014 }' 00:25:08.014 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.014 11:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.014 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:08.014 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:08.271 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:08.529 [2024-06-10 11:48:40.555802] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:08.529 [2024-06-10 11:48:40.556066] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.529 [2024-06-10 11:48:40.556230] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.787 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.046 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.046 "name": "Existed_Raid", 00:25:09.046 "uuid": "234a06c6-7c68-4e6e-8ecf-fc8ca033cb47", 00:25:09.046 "strip_size_kb": 64, 00:25:09.046 "state": "offline", 00:25:09.046 "raid_level": "raid0", 00:25:09.046 "superblock": true, 00:25:09.046 "num_base_bdevs": 4, 00:25:09.046 "num_base_bdevs_discovered": 3, 00:25:09.046 "num_base_bdevs_operational": 3, 00:25:09.046 "base_bdevs_list": [ 00:25:09.046 { 00:25:09.046 "name": null, 00:25:09.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.046 "is_configured": false, 00:25:09.046 "data_offset": 2048, 00:25:09.046 "data_size": 63488 00:25:09.046 }, 00:25:09.046 { 00:25:09.046 "name": "BaseBdev2", 00:25:09.046 "uuid": "c82ff0f2-3b01-4d26-9f47-9ee5e20792b0", 00:25:09.046 "is_configured": true, 00:25:09.046 "data_offset": 2048, 00:25:09.046 "data_size": 63488 00:25:09.046 }, 00:25:09.046 { 00:25:09.046 "name": "BaseBdev3", 00:25:09.046 "uuid": "3f923087-e1d6-4ed2-8206-54f2b063d11f", 00:25:09.046 "is_configured": true, 00:25:09.046 "data_offset": 2048, 00:25:09.046 "data_size": 63488 00:25:09.046 }, 00:25:09.046 { 00:25:09.046 "name": "BaseBdev4", 00:25:09.046 "uuid": "2686876e-a2fa-4710-8269-bc158c55d3f3", 00:25:09.046 "is_configured": true, 00:25:09.046 "data_offset": 2048, 00:25:09.046 "data_size": 63488 00:25:09.046 } 00:25:09.046 ] 00:25:09.046 }' 00:25:09.046 11:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.046 11:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.616 11:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:09.616 11:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:09.616 11:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.616 11:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:09.882 11:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:09.883 11:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:09.883 11:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:10.141 [2024-06-10 11:48:42.060984] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:10.141 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # 
(( i++ )) 00:25:10.141 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:10.141 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:10.141 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.707 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:10.707 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:10.707 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:10.707 [2024-06-10 11:48:42.704953] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:10.965 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:10.965 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:10.965 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.965 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:11.224 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:11.224 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:11.224 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:11.482 [2024-06-10 11:48:43.312242] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:11.482 [2024-06-10 11:48:43.312502] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:25:11.482 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:11.482 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:11.482 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.482 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:11.740 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:11.740 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:11.740 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:11.740 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:11.740 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:11.740 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:11.998 BaseBdev2 00:25:11.998 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 
00:25:11.998 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:25:11.998 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:25:11.998 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:25:11.998 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:25:11.998 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:25:11.998 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:12.256 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:12.514 [ 00:25:12.514 { 00:25:12.514 "name": "BaseBdev2", 00:25:12.514 "aliases": [ 00:25:12.514 "8b28928f-55b4-452b-a663-a7962a4427ff" 00:25:12.514 ], 00:25:12.514 "product_name": "Malloc disk", 00:25:12.514 "block_size": 512, 00:25:12.514 "num_blocks": 65536, 00:25:12.514 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:12.515 "assigned_rate_limits": { 00:25:12.515 "rw_ios_per_sec": 0, 00:25:12.515 "rw_mbytes_per_sec": 0, 00:25:12.515 "r_mbytes_per_sec": 0, 00:25:12.515 "w_mbytes_per_sec": 0 00:25:12.515 }, 00:25:12.515 "claimed": false, 00:25:12.515 "zoned": false, 00:25:12.515 "supported_io_types": { 00:25:12.515 "read": true, 00:25:12.515 "write": true, 00:25:12.515 "unmap": true, 00:25:12.515 "write_zeroes": true, 00:25:12.515 "flush": true, 00:25:12.515 "reset": true, 00:25:12.515 "compare": false, 00:25:12.515 "compare_and_write": false, 00:25:12.515 "abort": true, 00:25:12.515 "nvme_admin": false, 00:25:12.515 "nvme_io": false 00:25:12.515 }, 00:25:12.515 "memory_domains": [ 00:25:12.515 { 00:25:12.515 "dma_device_id": "system", 00:25:12.515 "dma_device_type": 1 00:25:12.515 }, 00:25:12.515 { 00:25:12.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.515 "dma_device_type": 2 00:25:12.515 } 00:25:12.515 ], 00:25:12.515 "driver_specific": {} 00:25:12.515 } 00:25:12.515 ] 00:25:12.515 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:25:12.515 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:12.515 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:12.515 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:12.515 BaseBdev3 00:25:12.772 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:12.772 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:25:12.772 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:25:12.772 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:25:12.772 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:25:12.772 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:25:12.772 11:48:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:12.772 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:13.030 [ 00:25:13.031 { 00:25:13.031 "name": "BaseBdev3", 00:25:13.031 "aliases": [ 00:25:13.031 "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b" 00:25:13.031 ], 00:25:13.031 "product_name": "Malloc disk", 00:25:13.031 "block_size": 512, 00:25:13.031 "num_blocks": 65536, 00:25:13.031 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:13.031 "assigned_rate_limits": { 00:25:13.031 "rw_ios_per_sec": 0, 00:25:13.031 "rw_mbytes_per_sec": 0, 00:25:13.031 "r_mbytes_per_sec": 0, 00:25:13.031 "w_mbytes_per_sec": 0 00:25:13.031 }, 00:25:13.031 "claimed": false, 00:25:13.031 "zoned": false, 00:25:13.031 "supported_io_types": { 00:25:13.031 "read": true, 00:25:13.031 "write": true, 00:25:13.031 "unmap": true, 00:25:13.031 "write_zeroes": true, 00:25:13.031 "flush": true, 00:25:13.031 "reset": true, 00:25:13.031 "compare": false, 00:25:13.031 "compare_and_write": false, 00:25:13.031 "abort": true, 00:25:13.031 "nvme_admin": false, 00:25:13.031 "nvme_io": false 00:25:13.031 }, 00:25:13.031 "memory_domains": [ 00:25:13.031 { 00:25:13.031 "dma_device_id": "system", 00:25:13.031 "dma_device_type": 1 00:25:13.031 }, 00:25:13.031 { 00:25:13.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.031 "dma_device_type": 2 00:25:13.031 } 00:25:13.031 ], 00:25:13.031 "driver_specific": {} 00:25:13.031 } 00:25:13.031 ] 00:25:13.031 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:25:13.031 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:13.031 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:13.031 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:13.288 BaseBdev4 00:25:13.288 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:13.288 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:25:13.288 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:25:13.288 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:25:13.288 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:25:13.288 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:25:13.289 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:13.547 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:13.805 [ 00:25:13.805 { 00:25:13.805 "name": "BaseBdev4", 00:25:13.805 "aliases": [ 00:25:13.805 "b9775f8f-ab54-4ac1-b918-cc27a02a106c" 00:25:13.805 ], 00:25:13.805 "product_name": "Malloc disk", 00:25:13.805 "block_size": 512, 
00:25:13.805 "num_blocks": 65536, 00:25:13.805 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:13.805 "assigned_rate_limits": { 00:25:13.805 "rw_ios_per_sec": 0, 00:25:13.805 "rw_mbytes_per_sec": 0, 00:25:13.805 "r_mbytes_per_sec": 0, 00:25:13.805 "w_mbytes_per_sec": 0 00:25:13.805 }, 00:25:13.805 "claimed": false, 00:25:13.805 "zoned": false, 00:25:13.805 "supported_io_types": { 00:25:13.805 "read": true, 00:25:13.805 "write": true, 00:25:13.805 "unmap": true, 00:25:13.805 "write_zeroes": true, 00:25:13.805 "flush": true, 00:25:13.805 "reset": true, 00:25:13.805 "compare": false, 00:25:13.805 "compare_and_write": false, 00:25:13.806 "abort": true, 00:25:13.806 "nvme_admin": false, 00:25:13.806 "nvme_io": false 00:25:13.806 }, 00:25:13.806 "memory_domains": [ 00:25:13.806 { 00:25:13.806 "dma_device_id": "system", 00:25:13.806 "dma_device_type": 1 00:25:13.806 }, 00:25:13.806 { 00:25:13.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.806 "dma_device_type": 2 00:25:13.806 } 00:25:13.806 ], 00:25:13.806 "driver_specific": {} 00:25:13.806 } 00:25:13.806 ] 00:25:13.806 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:25:13.806 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:13.806 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:13.806 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:14.064 [2024-06-10 11:48:45.946510] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:14.064 [2024-06-10 11:48:45.946832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:14.064 [2024-06-10 11:48:45.946937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:14.064 [2024-06-10 11:48:45.949033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:14.064 [2024-06-10 11:48:45.949214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:14.064 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:14.064 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.065 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.324 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.324 "name": "Existed_Raid", 00:25:14.324 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:14.324 "strip_size_kb": 64, 00:25:14.324 "state": "configuring", 00:25:14.324 "raid_level": "raid0", 00:25:14.324 "superblock": true, 00:25:14.324 "num_base_bdevs": 4, 00:25:14.324 "num_base_bdevs_discovered": 3, 00:25:14.324 "num_base_bdevs_operational": 4, 00:25:14.324 "base_bdevs_list": [ 00:25:14.324 { 00:25:14.324 "name": "BaseBdev1", 00:25:14.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.324 "is_configured": false, 00:25:14.324 "data_offset": 0, 00:25:14.324 "data_size": 0 00:25:14.324 }, 00:25:14.324 { 00:25:14.324 "name": "BaseBdev2", 00:25:14.324 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:14.324 "is_configured": true, 00:25:14.324 "data_offset": 2048, 00:25:14.324 "data_size": 63488 00:25:14.324 }, 00:25:14.324 { 00:25:14.324 "name": "BaseBdev3", 00:25:14.324 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:14.324 "is_configured": true, 00:25:14.324 "data_offset": 2048, 00:25:14.324 "data_size": 63488 00:25:14.324 }, 00:25:14.324 { 00:25:14.324 "name": "BaseBdev4", 00:25:14.324 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:14.324 "is_configured": true, 00:25:14.324 "data_offset": 2048, 00:25:14.324 "data_size": 63488 00:25:14.324 } 00:25:14.324 ] 00:25:14.324 }' 00:25:14.324 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.324 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.890 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:15.149 [2024-06-10 11:48:47.082695] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.149 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.407 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:15.407 "name": "Existed_Raid", 00:25:15.407 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:15.407 "strip_size_kb": 64, 00:25:15.407 "state": "configuring", 00:25:15.407 "raid_level": "raid0", 00:25:15.407 "superblock": true, 00:25:15.407 "num_base_bdevs": 4, 00:25:15.407 "num_base_bdevs_discovered": 2, 00:25:15.407 "num_base_bdevs_operational": 4, 00:25:15.407 "base_bdevs_list": [ 00:25:15.407 { 00:25:15.407 "name": "BaseBdev1", 00:25:15.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.407 "is_configured": false, 00:25:15.407 "data_offset": 0, 00:25:15.407 "data_size": 0 00:25:15.407 }, 00:25:15.407 { 00:25:15.407 "name": null, 00:25:15.407 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:15.407 "is_configured": false, 00:25:15.407 "data_offset": 2048, 00:25:15.407 "data_size": 63488 00:25:15.407 }, 00:25:15.407 { 00:25:15.407 "name": "BaseBdev3", 00:25:15.407 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:15.407 "is_configured": true, 00:25:15.407 "data_offset": 2048, 00:25:15.407 "data_size": 63488 00:25:15.407 }, 00:25:15.407 { 00:25:15.407 "name": "BaseBdev4", 00:25:15.407 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:15.407 "is_configured": true, 00:25:15.407 "data_offset": 2048, 00:25:15.407 "data_size": 63488 00:25:15.407 } 00:25:15.407 ] 00:25:15.407 }' 00:25:15.407 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:15.407 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.973 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.973 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:16.231 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:16.231 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:16.492 [2024-06-10 11:48:48.338447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:16.492 BaseBdev1 00:25:16.492 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:16.492 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:25:16.492 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:25:16.492 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:25:16.492 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:25:16.492 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:25:16.492 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:16.750 [ 00:25:16.750 { 00:25:16.750 "name": "BaseBdev1", 00:25:16.750 "aliases": [ 00:25:16.750 "eafc3b9d-b65f-4a60-9f27-59d5584bce4b" 00:25:16.750 ], 00:25:16.750 "product_name": "Malloc disk", 00:25:16.750 "block_size": 512, 00:25:16.750 "num_blocks": 65536, 00:25:16.750 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:16.750 "assigned_rate_limits": { 00:25:16.750 "rw_ios_per_sec": 0, 00:25:16.750 "rw_mbytes_per_sec": 0, 00:25:16.750 "r_mbytes_per_sec": 0, 00:25:16.750 "w_mbytes_per_sec": 0 00:25:16.750 }, 00:25:16.750 "claimed": true, 00:25:16.750 "claim_type": "exclusive_write", 00:25:16.750 "zoned": false, 00:25:16.750 "supported_io_types": { 00:25:16.750 "read": true, 00:25:16.750 "write": true, 00:25:16.750 "unmap": true, 00:25:16.750 "write_zeroes": true, 00:25:16.750 "flush": true, 00:25:16.750 "reset": true, 00:25:16.750 "compare": false, 00:25:16.750 "compare_and_write": false, 00:25:16.750 "abort": true, 00:25:16.750 "nvme_admin": false, 00:25:16.750 "nvme_io": false 00:25:16.750 }, 00:25:16.750 "memory_domains": [ 00:25:16.750 { 00:25:16.750 "dma_device_id": "system", 00:25:16.750 "dma_device_type": 1 00:25:16.750 }, 00:25:16.750 { 00:25:16.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.750 "dma_device_type": 2 00:25:16.750 } 00:25:16.750 ], 00:25:16.750 "driver_specific": {} 00:25:16.750 } 00:25:16.750 ] 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.750 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.008 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.009 "name": "Existed_Raid", 00:25:17.009 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:17.009 "strip_size_kb": 64, 00:25:17.009 "state": "configuring", 00:25:17.009 "raid_level": "raid0", 00:25:17.009 "superblock": true, 00:25:17.009 "num_base_bdevs": 4, 00:25:17.009 "num_base_bdevs_discovered": 3, 
00:25:17.009 "num_base_bdevs_operational": 4, 00:25:17.009 "base_bdevs_list": [ 00:25:17.009 { 00:25:17.009 "name": "BaseBdev1", 00:25:17.009 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:17.009 "is_configured": true, 00:25:17.009 "data_offset": 2048, 00:25:17.009 "data_size": 63488 00:25:17.009 }, 00:25:17.009 { 00:25:17.009 "name": null, 00:25:17.009 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:17.009 "is_configured": false, 00:25:17.009 "data_offset": 2048, 00:25:17.009 "data_size": 63488 00:25:17.009 }, 00:25:17.009 { 00:25:17.009 "name": "BaseBdev3", 00:25:17.009 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:17.009 "is_configured": true, 00:25:17.009 "data_offset": 2048, 00:25:17.009 "data_size": 63488 00:25:17.009 }, 00:25:17.009 { 00:25:17.009 "name": "BaseBdev4", 00:25:17.009 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:17.009 "is_configured": true, 00:25:17.009 "data_offset": 2048, 00:25:17.009 "data_size": 63488 00:25:17.009 } 00:25:17.009 ] 00:25:17.009 }' 00:25:17.009 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.009 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.575 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:17.575 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.835 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:17.835 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:18.108 [2024-06-10 11:48:50.131075] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.108 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.673 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:25:18.673 "name": "Existed_Raid", 00:25:18.673 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:18.673 "strip_size_kb": 64, 00:25:18.673 "state": "configuring", 00:25:18.673 "raid_level": "raid0", 00:25:18.673 "superblock": true, 00:25:18.673 "num_base_bdevs": 4, 00:25:18.673 "num_base_bdevs_discovered": 2, 00:25:18.673 "num_base_bdevs_operational": 4, 00:25:18.673 "base_bdevs_list": [ 00:25:18.673 { 00:25:18.673 "name": "BaseBdev1", 00:25:18.673 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:18.673 "is_configured": true, 00:25:18.673 "data_offset": 2048, 00:25:18.673 "data_size": 63488 00:25:18.673 }, 00:25:18.673 { 00:25:18.673 "name": null, 00:25:18.673 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:18.673 "is_configured": false, 00:25:18.673 "data_offset": 2048, 00:25:18.673 "data_size": 63488 00:25:18.673 }, 00:25:18.673 { 00:25:18.673 "name": null, 00:25:18.673 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:18.673 "is_configured": false, 00:25:18.673 "data_offset": 2048, 00:25:18.673 "data_size": 63488 00:25:18.673 }, 00:25:18.673 { 00:25:18.673 "name": "BaseBdev4", 00:25:18.673 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:18.673 "is_configured": true, 00:25:18.673 "data_offset": 2048, 00:25:18.673 "data_size": 63488 00:25:18.673 } 00:25:18.673 ] 00:25:18.673 }' 00:25:18.673 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.673 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.262 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:19.262 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.262 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:19.262 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:19.521 [2024-06-10 11:48:51.499405] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.521 11:48:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.521 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.780 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.780 "name": "Existed_Raid", 00:25:19.780 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:19.780 "strip_size_kb": 64, 00:25:19.780 "state": "configuring", 00:25:19.780 "raid_level": "raid0", 00:25:19.780 "superblock": true, 00:25:19.780 "num_base_bdevs": 4, 00:25:19.780 "num_base_bdevs_discovered": 3, 00:25:19.780 "num_base_bdevs_operational": 4, 00:25:19.780 "base_bdevs_list": [ 00:25:19.780 { 00:25:19.780 "name": "BaseBdev1", 00:25:19.780 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:19.780 "is_configured": true, 00:25:19.780 "data_offset": 2048, 00:25:19.780 "data_size": 63488 00:25:19.780 }, 00:25:19.780 { 00:25:19.780 "name": null, 00:25:19.780 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:19.780 "is_configured": false, 00:25:19.780 "data_offset": 2048, 00:25:19.780 "data_size": 63488 00:25:19.780 }, 00:25:19.780 { 00:25:19.780 "name": "BaseBdev3", 00:25:19.780 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:19.780 "is_configured": true, 00:25:19.780 "data_offset": 2048, 00:25:19.780 "data_size": 63488 00:25:19.780 }, 00:25:19.780 { 00:25:19.780 "name": "BaseBdev4", 00:25:19.780 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:19.780 "is_configured": true, 00:25:19.780 "data_offset": 2048, 00:25:19.780 "data_size": 63488 00:25:19.780 } 00:25:19.780 ] 00:25:19.780 }' 00:25:19.780 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.780 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.347 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.347 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:20.605 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:20.605 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:20.864 [2024-06-10 11:48:52.783683] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.864 11:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.864 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.121 11:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:21.121 "name": "Existed_Raid", 00:25:21.121 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:21.121 "strip_size_kb": 64, 00:25:21.121 "state": "configuring", 00:25:21.121 "raid_level": "raid0", 00:25:21.121 "superblock": true, 00:25:21.121 "num_base_bdevs": 4, 00:25:21.121 "num_base_bdevs_discovered": 2, 00:25:21.121 "num_base_bdevs_operational": 4, 00:25:21.121 "base_bdevs_list": [ 00:25:21.121 { 00:25:21.121 "name": null, 00:25:21.121 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:21.121 "is_configured": false, 00:25:21.121 "data_offset": 2048, 00:25:21.121 "data_size": 63488 00:25:21.121 }, 00:25:21.121 { 00:25:21.121 "name": null, 00:25:21.121 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:21.121 "is_configured": false, 00:25:21.121 "data_offset": 2048, 00:25:21.121 "data_size": 63488 00:25:21.121 }, 00:25:21.121 { 00:25:21.121 "name": "BaseBdev3", 00:25:21.121 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:21.121 "is_configured": true, 00:25:21.121 "data_offset": 2048, 00:25:21.121 "data_size": 63488 00:25:21.121 }, 00:25:21.121 { 00:25:21.121 "name": "BaseBdev4", 00:25:21.121 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:21.121 "is_configured": true, 00:25:21.121 "data_offset": 2048, 00:25:21.121 "data_size": 63488 00:25:21.121 } 00:25:21.121 ] 00:25:21.121 }' 00:25:21.121 11:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:21.121 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.056 11:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.056 11:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:22.056 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:22.056 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:22.314 [2024-06-10 11:48:54.262472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
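The trace above and below repeats one pattern after every reconfiguration step (bdev_malloc_delete BaseBdev1, bdev_raid_add_base_bdev, ...): dump all raid bdevs over the RPC socket, pick out Existed_Raid, and assert on its fields. A minimal sketch of that check, assuming the rpc.py path and /var/tmp/spdk-raid.sock socket seen in the log (this is not the verbatim verify_raid_bdev_state helper from bdev_raid.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Dump every raid bdev and keep only the one under test.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # Assert on the fields the test cares about: state, level, strip size, member count.
    [[ $(jq -r '.state'         <<< "$info") == configuring ]]
    [[ $(jq -r '.raid_level'    <<< "$info") == raid0 ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 4 ]]
    # Individual slots are probed the same way, e.g. whether slot 2 is still configured:
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'
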
00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.314 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.572 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.572 "name": "Existed_Raid", 00:25:22.572 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:22.572 "strip_size_kb": 64, 00:25:22.572 "state": "configuring", 00:25:22.572 "raid_level": "raid0", 00:25:22.572 "superblock": true, 00:25:22.572 "num_base_bdevs": 4, 00:25:22.572 "num_base_bdevs_discovered": 3, 00:25:22.572 "num_base_bdevs_operational": 4, 00:25:22.572 "base_bdevs_list": [ 00:25:22.572 { 00:25:22.572 "name": null, 00:25:22.572 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:22.572 "is_configured": false, 00:25:22.572 "data_offset": 2048, 00:25:22.572 "data_size": 63488 00:25:22.572 }, 00:25:22.572 { 00:25:22.572 "name": "BaseBdev2", 00:25:22.572 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:22.572 "is_configured": true, 00:25:22.572 "data_offset": 2048, 00:25:22.572 "data_size": 63488 00:25:22.572 }, 00:25:22.572 { 00:25:22.572 "name": "BaseBdev3", 00:25:22.572 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:22.572 "is_configured": true, 00:25:22.572 "data_offset": 2048, 00:25:22.572 "data_size": 63488 00:25:22.572 }, 00:25:22.572 { 00:25:22.572 "name": "BaseBdev4", 00:25:22.572 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:22.572 "is_configured": true, 00:25:22.572 "data_offset": 2048, 00:25:22.572 "data_size": 63488 00:25:22.572 } 00:25:22.572 ] 00:25:22.572 }' 00:25:22.572 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.572 11:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.139 11:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.139 11:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:23.397 11:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:23.397 11:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:23.397 11:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.655 11:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b NewBaseBdev -u eafc3b9d-b65f-4a60-9f27-59d5584bce4b 00:25:23.914 [2024-06-10 11:48:55.781101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:23.914 [2024-06-10 11:48:55.781790] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:25:23.914 [2024-06-10 11:48:55.782069] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:23.914 [2024-06-10 11:48:55.782428] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:23.914 NewBaseBdev 00:25:23.914 [2024-06-10 11:48:55.783119] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:25:23.914 [2024-06-10 11:48:55.783517] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:25:23.914 [2024-06-10 11:48:55.783932] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.914 11:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:23.914 11:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:25:23.914 11:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:25:23.914 11:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:25:23.914 11:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:25:23.914 11:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:25:23.914 11:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:24.173 11:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:24.431 [ 00:25:24.431 { 00:25:24.431 "name": "NewBaseBdev", 00:25:24.431 "aliases": [ 00:25:24.431 "eafc3b9d-b65f-4a60-9f27-59d5584bce4b" 00:25:24.431 ], 00:25:24.431 "product_name": "Malloc disk", 00:25:24.431 "block_size": 512, 00:25:24.431 "num_blocks": 65536, 00:25:24.431 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:24.431 "assigned_rate_limits": { 00:25:24.431 "rw_ios_per_sec": 0, 00:25:24.431 "rw_mbytes_per_sec": 0, 00:25:24.431 "r_mbytes_per_sec": 0, 00:25:24.431 "w_mbytes_per_sec": 0 00:25:24.431 }, 00:25:24.431 "claimed": true, 00:25:24.431 "claim_type": "exclusive_write", 00:25:24.431 "zoned": false, 00:25:24.431 "supported_io_types": { 00:25:24.431 "read": true, 00:25:24.431 "write": true, 00:25:24.431 "unmap": true, 00:25:24.431 "write_zeroes": true, 00:25:24.431 "flush": true, 00:25:24.431 "reset": true, 00:25:24.431 "compare": false, 00:25:24.431 "compare_and_write": false, 00:25:24.431 "abort": true, 00:25:24.431 "nvme_admin": false, 00:25:24.431 "nvme_io": false 00:25:24.431 }, 00:25:24.431 "memory_domains": [ 00:25:24.431 { 00:25:24.431 "dma_device_id": "system", 00:25:24.431 "dma_device_type": 1 00:25:24.431 }, 00:25:24.431 { 00:25:24.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.431 "dma_device_type": 2 00:25:24.431 } 00:25:24.431 ], 00:25:24.431 "driver_specific": {} 00:25:24.431 } 00:25:24.431 ] 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 
-- # return 0 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.431 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.689 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.689 "name": "Existed_Raid", 00:25:24.689 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:24.689 "strip_size_kb": 64, 00:25:24.689 "state": "online", 00:25:24.689 "raid_level": "raid0", 00:25:24.689 "superblock": true, 00:25:24.689 "num_base_bdevs": 4, 00:25:24.689 "num_base_bdevs_discovered": 4, 00:25:24.689 "num_base_bdevs_operational": 4, 00:25:24.689 "base_bdevs_list": [ 00:25:24.689 { 00:25:24.689 "name": "NewBaseBdev", 00:25:24.689 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:24.689 "is_configured": true, 00:25:24.689 "data_offset": 2048, 00:25:24.689 "data_size": 63488 00:25:24.689 }, 00:25:24.689 { 00:25:24.689 "name": "BaseBdev2", 00:25:24.690 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:24.690 "is_configured": true, 00:25:24.690 "data_offset": 2048, 00:25:24.690 "data_size": 63488 00:25:24.690 }, 00:25:24.690 { 00:25:24.690 "name": "BaseBdev3", 00:25:24.690 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:24.690 "is_configured": true, 00:25:24.690 "data_offset": 2048, 00:25:24.690 "data_size": 63488 00:25:24.690 }, 00:25:24.690 { 00:25:24.690 "name": "BaseBdev4", 00:25:24.690 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:24.690 "is_configured": true, 00:25:24.690 "data_offset": 2048, 00:25:24.690 "data_size": 63488 00:25:24.690 } 00:25:24.690 ] 00:25:24.690 }' 00:25:24.690 11:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.690 11:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.258 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:25.258 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:25.258 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:25.258 11:48:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:25.258 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:25.258 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:25.258 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:25.258 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:25.516 [2024-06-10 11:48:57.538195] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:25.516 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:25.516 "name": "Existed_Raid", 00:25:25.516 "aliases": [ 00:25:25.516 "1af147c6-4566-471e-9694-a7538ceed712" 00:25:25.516 ], 00:25:25.516 "product_name": "Raid Volume", 00:25:25.516 "block_size": 512, 00:25:25.516 "num_blocks": 253952, 00:25:25.517 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:25.517 "assigned_rate_limits": { 00:25:25.517 "rw_ios_per_sec": 0, 00:25:25.517 "rw_mbytes_per_sec": 0, 00:25:25.517 "r_mbytes_per_sec": 0, 00:25:25.517 "w_mbytes_per_sec": 0 00:25:25.517 }, 00:25:25.517 "claimed": false, 00:25:25.517 "zoned": false, 00:25:25.517 "supported_io_types": { 00:25:25.517 "read": true, 00:25:25.517 "write": true, 00:25:25.517 "unmap": true, 00:25:25.517 "write_zeroes": true, 00:25:25.517 "flush": true, 00:25:25.517 "reset": true, 00:25:25.517 "compare": false, 00:25:25.517 "compare_and_write": false, 00:25:25.517 "abort": false, 00:25:25.517 "nvme_admin": false, 00:25:25.517 "nvme_io": false 00:25:25.517 }, 00:25:25.517 "memory_domains": [ 00:25:25.517 { 00:25:25.517 "dma_device_id": "system", 00:25:25.517 "dma_device_type": 1 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.517 "dma_device_type": 2 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "dma_device_id": "system", 00:25:25.517 "dma_device_type": 1 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.517 "dma_device_type": 2 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "dma_device_id": "system", 00:25:25.517 "dma_device_type": 1 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.517 "dma_device_type": 2 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "dma_device_id": "system", 00:25:25.517 "dma_device_type": 1 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.517 "dma_device_type": 2 00:25:25.517 } 00:25:25.517 ], 00:25:25.517 "driver_specific": { 00:25:25.517 "raid": { 00:25:25.517 "uuid": "1af147c6-4566-471e-9694-a7538ceed712", 00:25:25.517 "strip_size_kb": 64, 00:25:25.517 "state": "online", 00:25:25.517 "raid_level": "raid0", 00:25:25.517 "superblock": true, 00:25:25.517 "num_base_bdevs": 4, 00:25:25.517 "num_base_bdevs_discovered": 4, 00:25:25.517 "num_base_bdevs_operational": 4, 00:25:25.517 "base_bdevs_list": [ 00:25:25.517 { 00:25:25.517 "name": "NewBaseBdev", 00:25:25.517 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:25.517 "is_configured": true, 00:25:25.517 "data_offset": 2048, 00:25:25.517 "data_size": 63488 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "name": "BaseBdev2", 00:25:25.517 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:25.517 "is_configured": true, 00:25:25.517 "data_offset": 2048, 
00:25:25.517 "data_size": 63488 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "name": "BaseBdev3", 00:25:25.517 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:25.517 "is_configured": true, 00:25:25.517 "data_offset": 2048, 00:25:25.517 "data_size": 63488 00:25:25.517 }, 00:25:25.517 { 00:25:25.517 "name": "BaseBdev4", 00:25:25.517 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:25.517 "is_configured": true, 00:25:25.517 "data_offset": 2048, 00:25:25.517 "data_size": 63488 00:25:25.517 } 00:25:25.517 ] 00:25:25.517 } 00:25:25.517 } 00:25:25.517 }' 00:25:25.517 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:25.775 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:25.775 BaseBdev2 00:25:25.775 BaseBdev3 00:25:25.775 BaseBdev4' 00:25:25.775 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:25.775 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:25.775 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:26.034 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:26.034 "name": "NewBaseBdev", 00:25:26.034 "aliases": [ 00:25:26.034 "eafc3b9d-b65f-4a60-9f27-59d5584bce4b" 00:25:26.034 ], 00:25:26.034 "product_name": "Malloc disk", 00:25:26.034 "block_size": 512, 00:25:26.034 "num_blocks": 65536, 00:25:26.034 "uuid": "eafc3b9d-b65f-4a60-9f27-59d5584bce4b", 00:25:26.034 "assigned_rate_limits": { 00:25:26.034 "rw_ios_per_sec": 0, 00:25:26.034 "rw_mbytes_per_sec": 0, 00:25:26.034 "r_mbytes_per_sec": 0, 00:25:26.034 "w_mbytes_per_sec": 0 00:25:26.034 }, 00:25:26.034 "claimed": true, 00:25:26.034 "claim_type": "exclusive_write", 00:25:26.034 "zoned": false, 00:25:26.034 "supported_io_types": { 00:25:26.034 "read": true, 00:25:26.034 "write": true, 00:25:26.034 "unmap": true, 00:25:26.034 "write_zeroes": true, 00:25:26.034 "flush": true, 00:25:26.034 "reset": true, 00:25:26.034 "compare": false, 00:25:26.034 "compare_and_write": false, 00:25:26.034 "abort": true, 00:25:26.034 "nvme_admin": false, 00:25:26.034 "nvme_io": false 00:25:26.034 }, 00:25:26.034 "memory_domains": [ 00:25:26.034 { 00:25:26.034 "dma_device_id": "system", 00:25:26.034 "dma_device_type": 1 00:25:26.034 }, 00:25:26.034 { 00:25:26.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.034 "dma_device_type": 2 00:25:26.034 } 00:25:26.034 ], 00:25:26.034 "driver_specific": {} 00:25:26.034 }' 00:25:26.034 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:26.034 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:26.034 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:26.034 11:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:26.034 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:26.034 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:26.034 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:26.292 11:48:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:26.292 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:26.292 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:26.292 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:26.292 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:26.292 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:26.292 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:26.292 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:26.551 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:26.551 "name": "BaseBdev2", 00:25:26.551 "aliases": [ 00:25:26.551 "8b28928f-55b4-452b-a663-a7962a4427ff" 00:25:26.551 ], 00:25:26.551 "product_name": "Malloc disk", 00:25:26.551 "block_size": 512, 00:25:26.551 "num_blocks": 65536, 00:25:26.551 "uuid": "8b28928f-55b4-452b-a663-a7962a4427ff", 00:25:26.551 "assigned_rate_limits": { 00:25:26.551 "rw_ios_per_sec": 0, 00:25:26.551 "rw_mbytes_per_sec": 0, 00:25:26.551 "r_mbytes_per_sec": 0, 00:25:26.551 "w_mbytes_per_sec": 0 00:25:26.551 }, 00:25:26.551 "claimed": true, 00:25:26.551 "claim_type": "exclusive_write", 00:25:26.551 "zoned": false, 00:25:26.551 "supported_io_types": { 00:25:26.551 "read": true, 00:25:26.551 "write": true, 00:25:26.551 "unmap": true, 00:25:26.551 "write_zeroes": true, 00:25:26.551 "flush": true, 00:25:26.551 "reset": true, 00:25:26.551 "compare": false, 00:25:26.551 "compare_and_write": false, 00:25:26.551 "abort": true, 00:25:26.551 "nvme_admin": false, 00:25:26.551 "nvme_io": false 00:25:26.551 }, 00:25:26.551 "memory_domains": [ 00:25:26.551 { 00:25:26.551 "dma_device_id": "system", 00:25:26.551 "dma_device_type": 1 00:25:26.551 }, 00:25:26.551 { 00:25:26.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.551 "dma_device_type": 2 00:25:26.551 } 00:25:26.551 ], 00:25:26.551 "driver_specific": {} 00:25:26.551 }' 00:25:26.551 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:26.551 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:26.809 11:48:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:26.809 11:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:27.376 "name": "BaseBdev3", 00:25:27.376 "aliases": [ 00:25:27.376 "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b" 00:25:27.376 ], 00:25:27.376 "product_name": "Malloc disk", 00:25:27.376 "block_size": 512, 00:25:27.376 "num_blocks": 65536, 00:25:27.376 "uuid": "b511b6b6-9776-4718-9c8a-4c8d1f7bfe3b", 00:25:27.376 "assigned_rate_limits": { 00:25:27.376 "rw_ios_per_sec": 0, 00:25:27.376 "rw_mbytes_per_sec": 0, 00:25:27.376 "r_mbytes_per_sec": 0, 00:25:27.376 "w_mbytes_per_sec": 0 00:25:27.376 }, 00:25:27.376 "claimed": true, 00:25:27.376 "claim_type": "exclusive_write", 00:25:27.376 "zoned": false, 00:25:27.376 "supported_io_types": { 00:25:27.376 "read": true, 00:25:27.376 "write": true, 00:25:27.376 "unmap": true, 00:25:27.376 "write_zeroes": true, 00:25:27.376 "flush": true, 00:25:27.376 "reset": true, 00:25:27.376 "compare": false, 00:25:27.376 "compare_and_write": false, 00:25:27.376 "abort": true, 00:25:27.376 "nvme_admin": false, 00:25:27.376 "nvme_io": false 00:25:27.376 }, 00:25:27.376 "memory_domains": [ 00:25:27.376 { 00:25:27.376 "dma_device_id": "system", 00:25:27.376 "dma_device_type": 1 00:25:27.376 }, 00:25:27.376 { 00:25:27.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.376 "dma_device_type": 2 00:25:27.376 } 00:25:27.376 ], 00:25:27.376 "driver_specific": {} 00:25:27.376 }' 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:27.376 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:27.634 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:27.634 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:27.634 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:27.634 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:27.634 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:27.634 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:27.634 11:48:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:27.891 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:27.891 "name": "BaseBdev4", 00:25:27.891 "aliases": [ 00:25:27.891 "b9775f8f-ab54-4ac1-b918-cc27a02a106c" 00:25:27.891 ], 00:25:27.891 "product_name": "Malloc disk", 00:25:27.891 "block_size": 512, 00:25:27.891 "num_blocks": 65536, 00:25:27.891 "uuid": "b9775f8f-ab54-4ac1-b918-cc27a02a106c", 00:25:27.891 "assigned_rate_limits": { 00:25:27.891 "rw_ios_per_sec": 0, 00:25:27.891 "rw_mbytes_per_sec": 0, 00:25:27.891 "r_mbytes_per_sec": 0, 00:25:27.891 "w_mbytes_per_sec": 0 00:25:27.891 }, 00:25:27.891 "claimed": true, 00:25:27.891 "claim_type": "exclusive_write", 00:25:27.891 "zoned": false, 00:25:27.891 "supported_io_types": { 00:25:27.891 "read": true, 00:25:27.891 "write": true, 00:25:27.891 "unmap": true, 00:25:27.891 "write_zeroes": true, 00:25:27.891 "flush": true, 00:25:27.891 "reset": true, 00:25:27.891 "compare": false, 00:25:27.891 "compare_and_write": false, 00:25:27.891 "abort": true, 00:25:27.891 "nvme_admin": false, 00:25:27.891 "nvme_io": false 00:25:27.891 }, 00:25:27.891 "memory_domains": [ 00:25:27.891 { 00:25:27.891 "dma_device_id": "system", 00:25:27.891 "dma_device_type": 1 00:25:27.891 }, 00:25:27.891 { 00:25:27.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.891 "dma_device_type": 2 00:25:27.891 } 00:25:27.891 ], 00:25:27.891 "driver_specific": {} 00:25:27.891 }' 00:25:27.891 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:27.891 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:27.891 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:27.891 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:27.891 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:27.891 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:27.891 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:28.148 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:28.148 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:28.148 11:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:28.148 11:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:28.148 11:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:28.148 11:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:28.522 [2024-06-10 11:49:00.278780] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:28.522 [2024-06-10 11:49:00.279123] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.522 [2024-06-10 11:49:00.279366] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.523 [2024-06-10 11:49:00.279664] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.523 [2024-06-10 11:49:00.279839] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 136833 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 136833 ']' 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 136833 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 136833 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 136833' 00:25:28.523 killing process with pid 136833 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 136833 00:25:28.523 11:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 136833 00:25:28.523 [2024-06-10 11:49:00.329827] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:28.784 [2024-06-10 11:49:00.775684] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:30.160 11:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:25:30.160 00:25:30.160 real 0m35.610s 00:25:30.160 user 1m4.656s 00:25:30.160 sys 0m5.040s 00:25:30.160 11:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:30.160 11:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.160 ************************************ 00:25:30.160 END TEST raid_state_function_test_sb 00:25:30.160 ************************************ 00:25:30.418 11:49:02 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:25:30.418 11:49:02 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:25:30.418 11:49:02 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:30.418 11:49:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:30.418 ************************************ 00:25:30.418 START TEST raid_superblock_test 00:25:30.418 ************************************ 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 4 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt_uuid=() 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=137947 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 137947 /var/tmp/spdk-raid.sock 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 137947 ']' 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:30.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:30.418 11:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.418 [2024-06-10 11:49:02.323754] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
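raid_superblock_test then brings up bdev_svc on the same RPC socket and, as the trace below shows, builds its array from passthru bdevs layered on malloc bdevs with fixed UUIDs. Condensed into a sketch (the loop and the rpc/sock variables are illustrative; the individual RPC calls are the ones that appear in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Four 32 MB, 512-byte-block malloc bdevs, each wrapped in a passthru bdev (pt1..pt4)
    # carrying a fixed UUID so the raid superblock contents are predictable.
    for i in 1 2 3 4; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
      "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
    done
    # Assemble them into raid0 with a 64 KiB strip and an on-disk superblock (-s).
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
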
00:25:30.418 [2024-06-10 11:49:02.323926] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137947 ] 00:25:30.676 [2024-06-10 11:49:02.488982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.676 [2024-06-10 11:49:02.720674] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.934 [2024-06-10 11:49:02.961649] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:31.584 malloc1 00:25:31.584 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:31.843 [2024-06-10 11:49:03.804974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:31.843 [2024-06-10 11:49:03.805110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.843 [2024-06-10 11:49:03.805156] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:31.843 [2024-06-10 11:49:03.805179] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.843 [2024-06-10 11:49:03.807953] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.843 [2024-06-10 11:49:03.808032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:31.843 pt1 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:31.843 11:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:32.101 malloc2 00:25:32.101 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:32.666 [2024-06-10 11:49:04.471981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:32.666 [2024-06-10 11:49:04.472177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.666 [2024-06-10 11:49:04.472264] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:32.666 [2024-06-10 11:49:04.472298] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.666 [2024-06-10 11:49:04.476129] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.666 [2024-06-10 11:49:04.476206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:32.666 pt2 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:32.666 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:32.925 malloc3 00:25:32.925 11:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:33.183 [2024-06-10 11:49:05.103691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:33.183 [2024-06-10 11:49:05.103864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.183 [2024-06-10 11:49:05.103940] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:33.183 [2024-06-10 11:49:05.103987] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.183 [2024-06-10 11:49:05.107029] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.183 [2024-06-10 11:49:05.107098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:33.183 pt3 00:25:33.183 11:49:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:33.183 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:33.183 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:25:33.183 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:25:33.183 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:33.183 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:33.183 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:33.183 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:33.183 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:33.441 malloc4 00:25:33.441 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:33.699 [2024-06-10 11:49:05.597130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:33.699 [2024-06-10 11:49:05.597300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.699 [2024-06-10 11:49:05.597339] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:33.699 [2024-06-10 11:49:05.597374] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.699 [2024-06-10 11:49:05.600182] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.699 [2024-06-10 11:49:05.600250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:33.699 pt4 00:25:33.699 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:33.699 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:33.699 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:33.958 [2024-06-10 11:49:05.889265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:33.958 [2024-06-10 11:49:05.891820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:33.958 [2024-06-10 11:49:05.891908] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:33.958 [2024-06-10 11:49:05.891984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:33.958 [2024-06-10 11:49:05.892221] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:33.958 [2024-06-10 11:49:05.892240] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:33.958 [2024-06-10 11:49:05.892483] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:33.958 [2024-06-10 11:49:05.892908] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:33.958 [2024-06-10 11:49:05.892929] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:33.958 [2024-06-10 11:49:05.893171] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.958 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.959 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.959 11:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.216 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:34.216 "name": "raid_bdev1", 00:25:34.217 "uuid": "7c4366dd-d06a-477a-9822-389ee6304754", 00:25:34.217 "strip_size_kb": 64, 00:25:34.217 "state": "online", 00:25:34.217 "raid_level": "raid0", 00:25:34.217 "superblock": true, 00:25:34.217 "num_base_bdevs": 4, 00:25:34.217 "num_base_bdevs_discovered": 4, 00:25:34.217 "num_base_bdevs_operational": 4, 00:25:34.217 "base_bdevs_list": [ 00:25:34.217 { 00:25:34.217 "name": "pt1", 00:25:34.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:34.217 "is_configured": true, 00:25:34.217 "data_offset": 2048, 00:25:34.217 "data_size": 63488 00:25:34.217 }, 00:25:34.217 { 00:25:34.217 "name": "pt2", 00:25:34.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:34.217 "is_configured": true, 00:25:34.217 "data_offset": 2048, 00:25:34.217 "data_size": 63488 00:25:34.217 }, 00:25:34.217 { 00:25:34.217 "name": "pt3", 00:25:34.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:34.217 "is_configured": true, 00:25:34.217 "data_offset": 2048, 00:25:34.217 "data_size": 63488 00:25:34.217 }, 00:25:34.217 { 00:25:34.217 "name": "pt4", 00:25:34.217 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:34.217 "is_configured": true, 00:25:34.217 "data_offset": 2048, 00:25:34.217 "data_size": 63488 00:25:34.217 } 00:25:34.217 ] 00:25:34.217 }' 00:25:34.217 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:34.217 11:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.781 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:34.781 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:34.781 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:34.781 
11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:34.781 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:34.781 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:34.781 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:34.781 11:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:35.049 [2024-06-10 11:49:06.993793] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.049 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:35.049 "name": "raid_bdev1", 00:25:35.049 "aliases": [ 00:25:35.049 "7c4366dd-d06a-477a-9822-389ee6304754" 00:25:35.049 ], 00:25:35.049 "product_name": "Raid Volume", 00:25:35.049 "block_size": 512, 00:25:35.049 "num_blocks": 253952, 00:25:35.049 "uuid": "7c4366dd-d06a-477a-9822-389ee6304754", 00:25:35.049 "assigned_rate_limits": { 00:25:35.049 "rw_ios_per_sec": 0, 00:25:35.049 "rw_mbytes_per_sec": 0, 00:25:35.049 "r_mbytes_per_sec": 0, 00:25:35.049 "w_mbytes_per_sec": 0 00:25:35.049 }, 00:25:35.049 "claimed": false, 00:25:35.049 "zoned": false, 00:25:35.049 "supported_io_types": { 00:25:35.049 "read": true, 00:25:35.049 "write": true, 00:25:35.049 "unmap": true, 00:25:35.049 "write_zeroes": true, 00:25:35.049 "flush": true, 00:25:35.049 "reset": true, 00:25:35.050 "compare": false, 00:25:35.050 "compare_and_write": false, 00:25:35.050 "abort": false, 00:25:35.050 "nvme_admin": false, 00:25:35.050 "nvme_io": false 00:25:35.050 }, 00:25:35.050 "memory_domains": [ 00:25:35.050 { 00:25:35.050 "dma_device_id": "system", 00:25:35.050 "dma_device_type": 1 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.050 "dma_device_type": 2 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "dma_device_id": "system", 00:25:35.050 "dma_device_type": 1 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.050 "dma_device_type": 2 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "dma_device_id": "system", 00:25:35.050 "dma_device_type": 1 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.050 "dma_device_type": 2 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "dma_device_id": "system", 00:25:35.050 "dma_device_type": 1 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.050 "dma_device_type": 2 00:25:35.050 } 00:25:35.050 ], 00:25:35.050 "driver_specific": { 00:25:35.050 "raid": { 00:25:35.050 "uuid": "7c4366dd-d06a-477a-9822-389ee6304754", 00:25:35.050 "strip_size_kb": 64, 00:25:35.050 "state": "online", 00:25:35.050 "raid_level": "raid0", 00:25:35.050 "superblock": true, 00:25:35.050 "num_base_bdevs": 4, 00:25:35.050 "num_base_bdevs_discovered": 4, 00:25:35.050 "num_base_bdevs_operational": 4, 00:25:35.050 "base_bdevs_list": [ 00:25:35.050 { 00:25:35.050 "name": "pt1", 00:25:35.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.050 "is_configured": true, 00:25:35.050 "data_offset": 2048, 00:25:35.050 "data_size": 63488 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "name": "pt2", 00:25:35.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.050 "is_configured": true, 00:25:35.050 "data_offset": 2048, 00:25:35.050 "data_size": 63488 00:25:35.050 }, 00:25:35.050 
{ 00:25:35.050 "name": "pt3", 00:25:35.050 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:35.050 "is_configured": true, 00:25:35.050 "data_offset": 2048, 00:25:35.050 "data_size": 63488 00:25:35.050 }, 00:25:35.050 { 00:25:35.050 "name": "pt4", 00:25:35.050 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:35.050 "is_configured": true, 00:25:35.050 "data_offset": 2048, 00:25:35.050 "data_size": 63488 00:25:35.050 } 00:25:35.050 ] 00:25:35.050 } 00:25:35.050 } 00:25:35.050 }' 00:25:35.050 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:35.050 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:35.050 pt2 00:25:35.050 pt3 00:25:35.050 pt4' 00:25:35.050 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:35.050 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:35.050 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:35.307 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:35.307 "name": "pt1", 00:25:35.307 "aliases": [ 00:25:35.307 "00000000-0000-0000-0000-000000000001" 00:25:35.307 ], 00:25:35.307 "product_name": "passthru", 00:25:35.307 "block_size": 512, 00:25:35.307 "num_blocks": 65536, 00:25:35.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.307 "assigned_rate_limits": { 00:25:35.307 "rw_ios_per_sec": 0, 00:25:35.307 "rw_mbytes_per_sec": 0, 00:25:35.307 "r_mbytes_per_sec": 0, 00:25:35.307 "w_mbytes_per_sec": 0 00:25:35.307 }, 00:25:35.307 "claimed": true, 00:25:35.307 "claim_type": "exclusive_write", 00:25:35.307 "zoned": false, 00:25:35.307 "supported_io_types": { 00:25:35.307 "read": true, 00:25:35.307 "write": true, 00:25:35.307 "unmap": true, 00:25:35.307 "write_zeroes": true, 00:25:35.307 "flush": true, 00:25:35.307 "reset": true, 00:25:35.307 "compare": false, 00:25:35.307 "compare_and_write": false, 00:25:35.307 "abort": true, 00:25:35.307 "nvme_admin": false, 00:25:35.307 "nvme_io": false 00:25:35.307 }, 00:25:35.307 "memory_domains": [ 00:25:35.307 { 00:25:35.307 "dma_device_id": "system", 00:25:35.307 "dma_device_type": 1 00:25:35.307 }, 00:25:35.307 { 00:25:35.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.307 "dma_device_type": 2 00:25:35.307 } 00:25:35.307 ], 00:25:35.307 "driver_specific": { 00:25:35.307 "passthru": { 00:25:35.307 "name": "pt1", 00:25:35.307 "base_bdev_name": "malloc1" 00:25:35.307 } 00:25:35.307 } 00:25:35.307 }' 00:25:35.307 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.307 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.564 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:35.564 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.564 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.564 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:35.564 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.564 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.564 11:49:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:35.564 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.564 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.822 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:35.822 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:35.822 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:35.822 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:36.080 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:36.080 "name": "pt2", 00:25:36.080 "aliases": [ 00:25:36.080 "00000000-0000-0000-0000-000000000002" 00:25:36.080 ], 00:25:36.080 "product_name": "passthru", 00:25:36.080 "block_size": 512, 00:25:36.080 "num_blocks": 65536, 00:25:36.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.080 "assigned_rate_limits": { 00:25:36.080 "rw_ios_per_sec": 0, 00:25:36.080 "rw_mbytes_per_sec": 0, 00:25:36.080 "r_mbytes_per_sec": 0, 00:25:36.080 "w_mbytes_per_sec": 0 00:25:36.080 }, 00:25:36.080 "claimed": true, 00:25:36.080 "claim_type": "exclusive_write", 00:25:36.080 "zoned": false, 00:25:36.080 "supported_io_types": { 00:25:36.080 "read": true, 00:25:36.080 "write": true, 00:25:36.080 "unmap": true, 00:25:36.080 "write_zeroes": true, 00:25:36.080 "flush": true, 00:25:36.080 "reset": true, 00:25:36.080 "compare": false, 00:25:36.080 "compare_and_write": false, 00:25:36.080 "abort": true, 00:25:36.080 "nvme_admin": false, 00:25:36.080 "nvme_io": false 00:25:36.080 }, 00:25:36.080 "memory_domains": [ 00:25:36.080 { 00:25:36.080 "dma_device_id": "system", 00:25:36.080 "dma_device_type": 1 00:25:36.080 }, 00:25:36.081 { 00:25:36.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.081 "dma_device_type": 2 00:25:36.081 } 00:25:36.081 ], 00:25:36.081 "driver_specific": { 00:25:36.081 "passthru": { 00:25:36.081 "name": "pt2", 00:25:36.081 "base_bdev_name": "malloc2" 00:25:36.081 } 00:25:36.081 } 00:25:36.081 }' 00:25:36.081 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.081 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.081 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:36.081 11:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.081 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.081 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:36.081 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:36.081 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:36.338 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:36.338 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.338 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.338 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:36.338 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:25:36.338 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:36.338 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:36.596 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:36.596 "name": "pt3", 00:25:36.596 "aliases": [ 00:25:36.596 "00000000-0000-0000-0000-000000000003" 00:25:36.596 ], 00:25:36.596 "product_name": "passthru", 00:25:36.596 "block_size": 512, 00:25:36.596 "num_blocks": 65536, 00:25:36.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:36.596 "assigned_rate_limits": { 00:25:36.596 "rw_ios_per_sec": 0, 00:25:36.596 "rw_mbytes_per_sec": 0, 00:25:36.596 "r_mbytes_per_sec": 0, 00:25:36.596 "w_mbytes_per_sec": 0 00:25:36.596 }, 00:25:36.596 "claimed": true, 00:25:36.596 "claim_type": "exclusive_write", 00:25:36.596 "zoned": false, 00:25:36.596 "supported_io_types": { 00:25:36.596 "read": true, 00:25:36.596 "write": true, 00:25:36.596 "unmap": true, 00:25:36.596 "write_zeroes": true, 00:25:36.596 "flush": true, 00:25:36.596 "reset": true, 00:25:36.596 "compare": false, 00:25:36.596 "compare_and_write": false, 00:25:36.596 "abort": true, 00:25:36.596 "nvme_admin": false, 00:25:36.596 "nvme_io": false 00:25:36.596 }, 00:25:36.596 "memory_domains": [ 00:25:36.596 { 00:25:36.596 "dma_device_id": "system", 00:25:36.596 "dma_device_type": 1 00:25:36.596 }, 00:25:36.596 { 00:25:36.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.596 "dma_device_type": 2 00:25:36.596 } 00:25:36.596 ], 00:25:36.596 "driver_specific": { 00:25:36.596 "passthru": { 00:25:36.596 "name": "pt3", 00:25:36.596 "base_bdev_name": "malloc3" 00:25:36.596 } 00:25:36.596 } 00:25:36.596 }' 00:25:36.596 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.596 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.596 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:36.596 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.596 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:36.855 11:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:37.114 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:37.114 "name": "pt4", 00:25:37.114 "aliases": [ 
00:25:37.114 "00000000-0000-0000-0000-000000000004" 00:25:37.114 ], 00:25:37.114 "product_name": "passthru", 00:25:37.114 "block_size": 512, 00:25:37.114 "num_blocks": 65536, 00:25:37.114 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:37.114 "assigned_rate_limits": { 00:25:37.114 "rw_ios_per_sec": 0, 00:25:37.114 "rw_mbytes_per_sec": 0, 00:25:37.114 "r_mbytes_per_sec": 0, 00:25:37.114 "w_mbytes_per_sec": 0 00:25:37.114 }, 00:25:37.114 "claimed": true, 00:25:37.114 "claim_type": "exclusive_write", 00:25:37.114 "zoned": false, 00:25:37.114 "supported_io_types": { 00:25:37.114 "read": true, 00:25:37.114 "write": true, 00:25:37.114 "unmap": true, 00:25:37.114 "write_zeroes": true, 00:25:37.114 "flush": true, 00:25:37.114 "reset": true, 00:25:37.114 "compare": false, 00:25:37.114 "compare_and_write": false, 00:25:37.114 "abort": true, 00:25:37.114 "nvme_admin": false, 00:25:37.114 "nvme_io": false 00:25:37.114 }, 00:25:37.114 "memory_domains": [ 00:25:37.114 { 00:25:37.114 "dma_device_id": "system", 00:25:37.114 "dma_device_type": 1 00:25:37.114 }, 00:25:37.114 { 00:25:37.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.114 "dma_device_type": 2 00:25:37.114 } 00:25:37.114 ], 00:25:37.114 "driver_specific": { 00:25:37.114 "passthru": { 00:25:37.114 "name": "pt4", 00:25:37.114 "base_bdev_name": "malloc4" 00:25:37.114 } 00:25:37.114 } 00:25:37.114 }' 00:25:37.114 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:37.114 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:37.374 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:37.374 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:37.374 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:37.374 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:37.374 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:37.374 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:37.374 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:37.374 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:37.632 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:37.632 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:37.632 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:37.632 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:37.991 [2024-06-10 11:49:09.695506] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:37.991 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7c4366dd-d06a-477a-9822-389ee6304754 00:25:37.991 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7c4366dd-d06a-477a-9822-389ee6304754 ']' 00:25:37.991 11:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:37.991 [2024-06-10 11:49:09.983314] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.991 
[2024-06-10 11:49:09.983365] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.991 [2024-06-10 11:49:09.983467] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.991 [2024-06-10 11:49:09.983546] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.991 [2024-06-10 11:49:09.983557] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:37.991 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.991 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:38.252 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:38.252 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:38.252 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:38.252 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:38.509 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:38.509 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:38.767 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:38.767 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:39.025 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:39.025 11:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:39.284 11:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:39.284 11:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # 
type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:39.541 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:39.799 [2024-06-10 11:49:11.675119] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:39.799 [2024-06-10 11:49:11.677360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:39.799 [2024-06-10 11:49:11.677436] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:39.799 [2024-06-10 11:49:11.677470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:39.799 [2024-06-10 11:49:11.677521] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:39.799 [2024-06-10 11:49:11.677620] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:39.799 [2024-06-10 11:49:11.677662] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:39.799 [2024-06-10 11:49:11.677696] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:39.799 [2024-06-10 11:49:11.677719] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:39.799 [2024-06-10 11:49:11.677729] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:25:39.799 request: 00:25:39.799 { 00:25:39.799 "name": "raid_bdev1", 00:25:39.799 "raid_level": "raid0", 00:25:39.799 "base_bdevs": [ 00:25:39.799 "malloc1", 00:25:39.799 "malloc2", 00:25:39.799 "malloc3", 00:25:39.799 "malloc4" 00:25:39.799 ], 00:25:39.799 "strip_size_kb": 64, 00:25:39.799 "superblock": false, 00:25:39.799 "method": "bdev_raid_create", 00:25:39.799 "req_id": 1 00:25:39.799 } 00:25:39.799 Got JSON-RPC error response 00:25:39.799 response: 00:25:39.799 { 00:25:39.799 "code": -17, 00:25:39.799 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:39.799 } 00:25:39.799 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:25:39.799 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:39.799 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:39.799 11:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:39.799 11:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:39.799 11:49:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.057 11:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:40.057 11:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:40.057 11:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:40.316 [2024-06-10 11:49:12.119399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:40.316 [2024-06-10 11:49:12.119587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.316 [2024-06-10 11:49:12.119661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:40.316 [2024-06-10 11:49:12.119751] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.316 [2024-06-10 11:49:12.124731] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.316 [2024-06-10 11:49:12.124807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:40.316 [2024-06-10 11:49:12.124993] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:40.316 [2024-06-10 11:49:12.125080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:40.316 pt1 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.316 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.574 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.574 "name": "raid_bdev1", 00:25:40.574 "uuid": "7c4366dd-d06a-477a-9822-389ee6304754", 00:25:40.574 "strip_size_kb": 64, 00:25:40.574 "state": "configuring", 00:25:40.574 "raid_level": "raid0", 00:25:40.574 "superblock": true, 00:25:40.574 "num_base_bdevs": 4, 00:25:40.574 "num_base_bdevs_discovered": 1, 00:25:40.574 "num_base_bdevs_operational": 4, 00:25:40.574 "base_bdevs_list": [ 00:25:40.574 { 00:25:40.574 "name": "pt1", 00:25:40.574 "uuid": "00000000-0000-0000-0000-000000000001", 
00:25:40.574 "is_configured": true, 00:25:40.574 "data_offset": 2048, 00:25:40.574 "data_size": 63488 00:25:40.574 }, 00:25:40.574 { 00:25:40.574 "name": null, 00:25:40.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:40.574 "is_configured": false, 00:25:40.574 "data_offset": 2048, 00:25:40.574 "data_size": 63488 00:25:40.574 }, 00:25:40.574 { 00:25:40.574 "name": null, 00:25:40.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:40.574 "is_configured": false, 00:25:40.574 "data_offset": 2048, 00:25:40.574 "data_size": 63488 00:25:40.574 }, 00:25:40.574 { 00:25:40.574 "name": null, 00:25:40.574 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:40.574 "is_configured": false, 00:25:40.574 "data_offset": 2048, 00:25:40.574 "data_size": 63488 00:25:40.574 } 00:25:40.574 ] 00:25:40.574 }' 00:25:40.574 11:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.574 11:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.137 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:25:41.137 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:41.395 [2024-06-10 11:49:13.335270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:41.395 [2024-06-10 11:49:13.335389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.395 [2024-06-10 11:49:13.335438] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:41.395 [2024-06-10 11:49:13.335487] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.395 [2024-06-10 11:49:13.336024] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.395 [2024-06-10 11:49:13.336065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:41.395 [2024-06-10 11:49:13.336191] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:41.395 [2024-06-10 11:49:13.336215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:41.395 pt2 00:25:41.395 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:41.654 [2024-06-10 11:49:13.627413] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.654 11:49:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.654 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.913 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:41.913 "name": "raid_bdev1", 00:25:41.913 "uuid": "7c4366dd-d06a-477a-9822-389ee6304754", 00:25:41.913 "strip_size_kb": 64, 00:25:41.913 "state": "configuring", 00:25:41.913 "raid_level": "raid0", 00:25:41.913 "superblock": true, 00:25:41.913 "num_base_bdevs": 4, 00:25:41.913 "num_base_bdevs_discovered": 1, 00:25:41.913 "num_base_bdevs_operational": 4, 00:25:41.913 "base_bdevs_list": [ 00:25:41.913 { 00:25:41.913 "name": "pt1", 00:25:41.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:41.913 "is_configured": true, 00:25:41.913 "data_offset": 2048, 00:25:41.913 "data_size": 63488 00:25:41.913 }, 00:25:41.913 { 00:25:41.913 "name": null, 00:25:41.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:41.913 "is_configured": false, 00:25:41.913 "data_offset": 2048, 00:25:41.913 "data_size": 63488 00:25:41.913 }, 00:25:41.913 { 00:25:41.913 "name": null, 00:25:41.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:41.913 "is_configured": false, 00:25:41.913 "data_offset": 2048, 00:25:41.913 "data_size": 63488 00:25:41.913 }, 00:25:41.913 { 00:25:41.913 "name": null, 00:25:41.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:41.913 "is_configured": false, 00:25:41.913 "data_offset": 2048, 00:25:41.913 "data_size": 63488 00:25:41.913 } 00:25:41.913 ] 00:25:41.913 }' 00:25:41.913 11:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.913 11:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.847 11:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:42.847 11:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:42.847 11:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:42.847 [2024-06-10 11:49:14.756411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:42.847 [2024-06-10 11:49:14.756524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.847 [2024-06-10 11:49:14.756577] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:42.847 [2024-06-10 11:49:14.756634] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.847 [2024-06-10 11:49:14.757257] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.847 [2024-06-10 11:49:14.757326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:42.847 [2024-06-10 11:49:14.757489] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:42.847 [2024-06-10 11:49:14.757523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:42.847 pt2 00:25:42.847 11:49:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:42.847 11:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:42.847 11:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:43.105 [2024-06-10 11:49:15.064462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:43.105 [2024-06-10 11:49:15.064576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:43.105 [2024-06-10 11:49:15.064611] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:43.105 [2024-06-10 11:49:15.064662] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:43.105 [2024-06-10 11:49:15.065161] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:43.105 [2024-06-10 11:49:15.065209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:43.105 [2024-06-10 11:49:15.065330] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:43.105 [2024-06-10 11:49:15.065353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:43.105 pt3 00:25:43.105 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:43.105 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:43.105 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:43.363 [2024-06-10 11:49:15.308485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:43.363 [2024-06-10 11:49:15.308606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:43.363 [2024-06-10 11:49:15.308657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:43.363 [2024-06-10 11:49:15.308713] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:43.363 [2024-06-10 11:49:15.309221] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:43.363 [2024-06-10 11:49:15.309287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:43.363 [2024-06-10 11:49:15.309400] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:43.363 [2024-06-10 11:49:15.309431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:43.363 [2024-06-10 11:49:15.309568] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:43.363 [2024-06-10 11:49:15.309585] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:43.363 [2024-06-10 11:49:15.309696] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:43.363 [2024-06-10 11:49:15.310052] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:43.363 [2024-06-10 11:49:15.310076] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:43.363 [2024-06-10 11:49:15.310225] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:25:43.363 pt4 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.363 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.621 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.621 "name": "raid_bdev1", 00:25:43.621 "uuid": "7c4366dd-d06a-477a-9822-389ee6304754", 00:25:43.621 "strip_size_kb": 64, 00:25:43.621 "state": "online", 00:25:43.621 "raid_level": "raid0", 00:25:43.621 "superblock": true, 00:25:43.621 "num_base_bdevs": 4, 00:25:43.621 "num_base_bdevs_discovered": 4, 00:25:43.621 "num_base_bdevs_operational": 4, 00:25:43.621 "base_bdevs_list": [ 00:25:43.621 { 00:25:43.621 "name": "pt1", 00:25:43.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:43.621 "is_configured": true, 00:25:43.621 "data_offset": 2048, 00:25:43.621 "data_size": 63488 00:25:43.621 }, 00:25:43.621 { 00:25:43.621 "name": "pt2", 00:25:43.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:43.621 "is_configured": true, 00:25:43.621 "data_offset": 2048, 00:25:43.621 "data_size": 63488 00:25:43.621 }, 00:25:43.621 { 00:25:43.621 "name": "pt3", 00:25:43.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:43.621 "is_configured": true, 00:25:43.621 "data_offset": 2048, 00:25:43.622 "data_size": 63488 00:25:43.622 }, 00:25:43.622 { 00:25:43.622 "name": "pt4", 00:25:43.622 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:43.622 "is_configured": true, 00:25:43.622 "data_offset": 2048, 00:25:43.622 "data_size": 63488 00:25:43.622 } 00:25:43.622 ] 00:25:43.622 }' 00:25:43.622 11:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.622 11:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.188 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:44.188 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:44.188 11:49:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:44.188 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:44.188 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:44.188 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:44.188 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:44.188 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:44.445 [2024-06-10 11:49:16.321164] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:44.445 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:44.445 "name": "raid_bdev1", 00:25:44.445 "aliases": [ 00:25:44.445 "7c4366dd-d06a-477a-9822-389ee6304754" 00:25:44.445 ], 00:25:44.445 "product_name": "Raid Volume", 00:25:44.445 "block_size": 512, 00:25:44.445 "num_blocks": 253952, 00:25:44.445 "uuid": "7c4366dd-d06a-477a-9822-389ee6304754", 00:25:44.445 "assigned_rate_limits": { 00:25:44.445 "rw_ios_per_sec": 0, 00:25:44.445 "rw_mbytes_per_sec": 0, 00:25:44.445 "r_mbytes_per_sec": 0, 00:25:44.445 "w_mbytes_per_sec": 0 00:25:44.445 }, 00:25:44.445 "claimed": false, 00:25:44.445 "zoned": false, 00:25:44.445 "supported_io_types": { 00:25:44.445 "read": true, 00:25:44.445 "write": true, 00:25:44.445 "unmap": true, 00:25:44.445 "write_zeroes": true, 00:25:44.445 "flush": true, 00:25:44.445 "reset": true, 00:25:44.445 "compare": false, 00:25:44.445 "compare_and_write": false, 00:25:44.445 "abort": false, 00:25:44.445 "nvme_admin": false, 00:25:44.445 "nvme_io": false 00:25:44.445 }, 00:25:44.445 "memory_domains": [ 00:25:44.445 { 00:25:44.445 "dma_device_id": "system", 00:25:44.445 "dma_device_type": 1 00:25:44.445 }, 00:25:44.445 { 00:25:44.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.445 "dma_device_type": 2 00:25:44.445 }, 00:25:44.445 { 00:25:44.445 "dma_device_id": "system", 00:25:44.445 "dma_device_type": 1 00:25:44.445 }, 00:25:44.445 { 00:25:44.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.445 "dma_device_type": 2 00:25:44.445 }, 00:25:44.445 { 00:25:44.445 "dma_device_id": "system", 00:25:44.446 "dma_device_type": 1 00:25:44.446 }, 00:25:44.446 { 00:25:44.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.446 "dma_device_type": 2 00:25:44.446 }, 00:25:44.446 { 00:25:44.446 "dma_device_id": "system", 00:25:44.446 "dma_device_type": 1 00:25:44.446 }, 00:25:44.446 { 00:25:44.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.446 "dma_device_type": 2 00:25:44.446 } 00:25:44.446 ], 00:25:44.446 "driver_specific": { 00:25:44.446 "raid": { 00:25:44.446 "uuid": "7c4366dd-d06a-477a-9822-389ee6304754", 00:25:44.446 "strip_size_kb": 64, 00:25:44.446 "state": "online", 00:25:44.446 "raid_level": "raid0", 00:25:44.446 "superblock": true, 00:25:44.446 "num_base_bdevs": 4, 00:25:44.446 "num_base_bdevs_discovered": 4, 00:25:44.446 "num_base_bdevs_operational": 4, 00:25:44.446 "base_bdevs_list": [ 00:25:44.446 { 00:25:44.446 "name": "pt1", 00:25:44.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:44.446 "is_configured": true, 00:25:44.446 "data_offset": 2048, 00:25:44.446 "data_size": 63488 00:25:44.446 }, 00:25:44.446 { 00:25:44.446 "name": "pt2", 00:25:44.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:44.446 "is_configured": true, 00:25:44.446 "data_offset": 2048, 
00:25:44.446 "data_size": 63488 00:25:44.446 }, 00:25:44.446 { 00:25:44.446 "name": "pt3", 00:25:44.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:44.446 "is_configured": true, 00:25:44.446 "data_offset": 2048, 00:25:44.446 "data_size": 63488 00:25:44.446 }, 00:25:44.446 { 00:25:44.446 "name": "pt4", 00:25:44.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:44.446 "is_configured": true, 00:25:44.446 "data_offset": 2048, 00:25:44.446 "data_size": 63488 00:25:44.446 } 00:25:44.446 ] 00:25:44.446 } 00:25:44.446 } 00:25:44.446 }' 00:25:44.446 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:44.446 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:44.446 pt2 00:25:44.446 pt3 00:25:44.446 pt4' 00:25:44.446 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:44.446 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:44.446 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:45.011 "name": "pt1", 00:25:45.011 "aliases": [ 00:25:45.011 "00000000-0000-0000-0000-000000000001" 00:25:45.011 ], 00:25:45.011 "product_name": "passthru", 00:25:45.011 "block_size": 512, 00:25:45.011 "num_blocks": 65536, 00:25:45.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:45.011 "assigned_rate_limits": { 00:25:45.011 "rw_ios_per_sec": 0, 00:25:45.011 "rw_mbytes_per_sec": 0, 00:25:45.011 "r_mbytes_per_sec": 0, 00:25:45.011 "w_mbytes_per_sec": 0 00:25:45.011 }, 00:25:45.011 "claimed": true, 00:25:45.011 "claim_type": "exclusive_write", 00:25:45.011 "zoned": false, 00:25:45.011 "supported_io_types": { 00:25:45.011 "read": true, 00:25:45.011 "write": true, 00:25:45.011 "unmap": true, 00:25:45.011 "write_zeroes": true, 00:25:45.011 "flush": true, 00:25:45.011 "reset": true, 00:25:45.011 "compare": false, 00:25:45.011 "compare_and_write": false, 00:25:45.011 "abort": true, 00:25:45.011 "nvme_admin": false, 00:25:45.011 "nvme_io": false 00:25:45.011 }, 00:25:45.011 "memory_domains": [ 00:25:45.011 { 00:25:45.011 "dma_device_id": "system", 00:25:45.011 "dma_device_type": 1 00:25:45.011 }, 00:25:45.011 { 00:25:45.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.011 "dma_device_type": 2 00:25:45.011 } 00:25:45.011 ], 00:25:45.011 "driver_specific": { 00:25:45.011 "passthru": { 00:25:45.011 "name": "pt1", 00:25:45.011 "base_bdev_name": "malloc1" 00:25:45.011 } 00:25:45.011 } 00:25:45.011 }' 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:45.011 11:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.011 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.011 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:45.011 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:45.011 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:45.011 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:45.269 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:45.269 "name": "pt2", 00:25:45.269 "aliases": [ 00:25:45.269 "00000000-0000-0000-0000-000000000002" 00:25:45.269 ], 00:25:45.269 "product_name": "passthru", 00:25:45.269 "block_size": 512, 00:25:45.269 "num_blocks": 65536, 00:25:45.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:45.269 "assigned_rate_limits": { 00:25:45.269 "rw_ios_per_sec": 0, 00:25:45.269 "rw_mbytes_per_sec": 0, 00:25:45.269 "r_mbytes_per_sec": 0, 00:25:45.269 "w_mbytes_per_sec": 0 00:25:45.269 }, 00:25:45.269 "claimed": true, 00:25:45.269 "claim_type": "exclusive_write", 00:25:45.269 "zoned": false, 00:25:45.269 "supported_io_types": { 00:25:45.269 "read": true, 00:25:45.269 "write": true, 00:25:45.269 "unmap": true, 00:25:45.269 "write_zeroes": true, 00:25:45.269 "flush": true, 00:25:45.269 "reset": true, 00:25:45.269 "compare": false, 00:25:45.269 "compare_and_write": false, 00:25:45.269 "abort": true, 00:25:45.269 "nvme_admin": false, 00:25:45.269 "nvme_io": false 00:25:45.269 }, 00:25:45.269 "memory_domains": [ 00:25:45.269 { 00:25:45.269 "dma_device_id": "system", 00:25:45.269 "dma_device_type": 1 00:25:45.269 }, 00:25:45.269 { 00:25:45.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.269 "dma_device_type": 2 00:25:45.269 } 00:25:45.269 ], 00:25:45.269 "driver_specific": { 00:25:45.269 "passthru": { 00:25:45.269 "name": "pt2", 00:25:45.269 "base_bdev_name": "malloc2" 00:25:45.269 } 00:25:45.269 } 00:25:45.269 }' 00:25:45.269 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:45.527 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.846 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.846 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:45.846 
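The repeated jq checks above reduce to a small per-base-bdev property loop; a sketch of what bdev_bdev_raid.sh lines @203-@208 are doing (expected values 512/null as shown in the trace; this is an illustration, not captured log output):
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for name in $base_bdev_names; do                    # pt1 pt2 pt3 pt4
    info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]    # data block size matches the raid volume
    [[ $(jq .md_size       <<< "$info") == null ]]    # no separate metadata
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]
  done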
11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:45.846 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:45.846 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:45.846 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:45.846 "name": "pt3", 00:25:45.846 "aliases": [ 00:25:45.846 "00000000-0000-0000-0000-000000000003" 00:25:45.846 ], 00:25:45.846 "product_name": "passthru", 00:25:45.846 "block_size": 512, 00:25:45.846 "num_blocks": 65536, 00:25:45.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:45.846 "assigned_rate_limits": { 00:25:45.846 "rw_ios_per_sec": 0, 00:25:45.846 "rw_mbytes_per_sec": 0, 00:25:45.846 "r_mbytes_per_sec": 0, 00:25:45.846 "w_mbytes_per_sec": 0 00:25:45.846 }, 00:25:45.846 "claimed": true, 00:25:45.846 "claim_type": "exclusive_write", 00:25:45.846 "zoned": false, 00:25:45.846 "supported_io_types": { 00:25:45.846 "read": true, 00:25:45.846 "write": true, 00:25:45.846 "unmap": true, 00:25:45.846 "write_zeroes": true, 00:25:45.846 "flush": true, 00:25:45.846 "reset": true, 00:25:45.846 "compare": false, 00:25:45.846 "compare_and_write": false, 00:25:45.846 "abort": true, 00:25:45.846 "nvme_admin": false, 00:25:45.846 "nvme_io": false 00:25:45.846 }, 00:25:45.846 "memory_domains": [ 00:25:45.846 { 00:25:45.846 "dma_device_id": "system", 00:25:45.846 "dma_device_type": 1 00:25:45.846 }, 00:25:45.846 { 00:25:45.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.846 "dma_device_type": 2 00:25:45.846 } 00:25:45.846 ], 00:25:45.846 "driver_specific": { 00:25:45.846 "passthru": { 00:25:45.846 "name": "pt3", 00:25:45.846 "base_bdev_name": "malloc3" 00:25:45.846 } 00:25:45.846 } 00:25:45.846 }' 00:25:45.846 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.846 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:46.104 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:46.104 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:46.104 11:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:46.104 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:46.104 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:46.104 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:46.104 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:46.104 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:46.361 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:46.361 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:46.361 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:46.361 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:46.361 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:46.618 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:25:46.618 "name": "pt4", 00:25:46.618 "aliases": [ 00:25:46.618 "00000000-0000-0000-0000-000000000004" 00:25:46.618 ], 00:25:46.618 "product_name": "passthru", 00:25:46.618 "block_size": 512, 00:25:46.618 "num_blocks": 65536, 00:25:46.618 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:46.618 "assigned_rate_limits": { 00:25:46.618 "rw_ios_per_sec": 0, 00:25:46.618 "rw_mbytes_per_sec": 0, 00:25:46.618 "r_mbytes_per_sec": 0, 00:25:46.618 "w_mbytes_per_sec": 0 00:25:46.618 }, 00:25:46.618 "claimed": true, 00:25:46.618 "claim_type": "exclusive_write", 00:25:46.618 "zoned": false, 00:25:46.618 "supported_io_types": { 00:25:46.618 "read": true, 00:25:46.618 "write": true, 00:25:46.618 "unmap": true, 00:25:46.618 "write_zeroes": true, 00:25:46.618 "flush": true, 00:25:46.618 "reset": true, 00:25:46.618 "compare": false, 00:25:46.618 "compare_and_write": false, 00:25:46.618 "abort": true, 00:25:46.618 "nvme_admin": false, 00:25:46.618 "nvme_io": false 00:25:46.618 }, 00:25:46.618 "memory_domains": [ 00:25:46.618 { 00:25:46.618 "dma_device_id": "system", 00:25:46.618 "dma_device_type": 1 00:25:46.618 }, 00:25:46.618 { 00:25:46.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.618 "dma_device_type": 2 00:25:46.618 } 00:25:46.618 ], 00:25:46.618 "driver_specific": { 00:25:46.618 "passthru": { 00:25:46.618 "name": "pt4", 00:25:46.618 "base_bdev_name": "malloc4" 00:25:46.618 } 00:25:46.618 } 00:25:46.618 }' 00:25:46.618 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:46.618 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:46.618 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:46.618 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:46.618 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:46.876 11:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:47.134 [2024-06-10 11:49:19.132014] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7c4366dd-d06a-477a-9822-389ee6304754 '!=' 7c4366dd-d06a-477a-9822-389ee6304754 ']' 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:47.134 11:49:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 137947 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 137947 ']' 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 137947 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 137947 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 137947' 00:25:47.134 killing process with pid 137947 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 137947 00:25:47.134 11:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 137947 00:25:47.134 [2024-06-10 11:49:19.172634] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:47.134 [2024-06-10 11:49:19.172820] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:47.134 [2024-06-10 11:49:19.173089] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:47.134 [2024-06-10 11:49:19.173109] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:25:47.703 [2024-06-10 11:49:19.623575] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:49.104 ************************************ 00:25:49.104 END TEST raid_superblock_test 00:25:49.104 ************************************ 00:25:49.104 11:49:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:25:49.104 00:25:49.104 real 0m18.839s 00:25:49.104 user 0m33.072s 00:25:49.104 sys 0m2.478s 00:25:49.104 11:49:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:49.104 11:49:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.104 11:49:21 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:25:49.104 11:49:21 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:25:49.104 11:49:21 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:49.104 11:49:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:49.104 ************************************ 00:25:49.104 START TEST raid_read_error_test 00:25:49.104 ************************************ 00:25:49.104 11:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 4 read 00:25:49.104 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:25:49.104 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:49.104 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:25:49.104 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:49.104 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:25:49.104 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:49.104 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.NAnJ2auQUz 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=138514 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 138514 /var/tmp/spdk-raid.sock 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 138514 ']' 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:25:49.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:49.363 11:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.363 [2024-06-10 11:49:21.254819] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:25:49.363 [2024-06-10 11:49:21.255025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138514 ] 00:25:49.621 [2024-06-10 11:49:21.430491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.880 [2024-06-10 11:49:21.696672] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.140 [2024-06-10 11:49:21.962161] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:50.140 11:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:50.140 11:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:25:50.140 11:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:50.140 11:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:50.397 BaseBdev1_malloc 00:25:50.397 11:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:50.654 true 00:25:50.654 11:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:51.220 [2024-06-10 11:49:23.035093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:51.220 [2024-06-10 11:49:23.035214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.220 [2024-06-10 11:49:23.035270] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:51.220 [2024-06-10 11:49:23.035303] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.220 [2024-06-10 11:49:23.038112] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.220 [2024-06-10 11:49:23.038187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:51.220 BaseBdev1 00:25:51.220 11:49:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:51.220 11:49:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:51.479 BaseBdev2_malloc 00:25:51.479 11:49:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:51.737 true 00:25:51.737 11:49:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:51.995 
[2024-06-10 11:49:23.917993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:51.995 [2024-06-10 11:49:23.918100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.995 [2024-06-10 11:49:23.918157] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:51.995 [2024-06-10 11:49:23.918178] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.995 [2024-06-10 11:49:23.920831] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.995 [2024-06-10 11:49:23.920889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:51.995 BaseBdev2 00:25:51.995 11:49:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:51.995 11:49:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:52.253 BaseBdev3_malloc 00:25:52.253 11:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:52.511 true 00:25:52.511 11:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:52.771 [2024-06-10 11:49:24.709563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:52.771 [2024-06-10 11:49:24.709694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.771 [2024-06-10 11:49:24.709731] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:52.771 [2024-06-10 11:49:24.709763] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.771 [2024-06-10 11:49:24.712434] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.771 [2024-06-10 11:49:24.712511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:52.771 BaseBdev3 00:25:52.771 11:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:52.771 11:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:53.029 BaseBdev4_malloc 00:25:53.029 11:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:53.287 true 00:25:53.287 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:53.544 [2024-06-10 11:49:25.499773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:53.544 [2024-06-10 11:49:25.499891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.544 [2024-06-10 11:49:25.499930] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:53.544 [2024-06-10 11:49:25.500008] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.544 
[2024-06-10 11:49:25.502663] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.544 [2024-06-10 11:49:25.502751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:53.544 BaseBdev4 00:25:53.544 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:53.803 [2024-06-10 11:49:25.755831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:53.803 [2024-06-10 11:49:25.758130] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:53.803 [2024-06-10 11:49:25.758228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:53.803 [2024-06-10 11:49:25.758290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:53.803 [2024-06-10 11:49:25.758538] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:25:53.803 [2024-06-10 11:49:25.758557] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:53.803 [2024-06-10 11:49:25.758695] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:53.803 [2024-06-10 11:49:25.759093] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:25:53.803 [2024-06-10 11:49:25.759114] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:25:53.803 [2024-06-10 11:49:25.759277] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.803 11:49:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.062 11:49:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.062 "name": "raid_bdev1", 00:25:54.062 "uuid": "4e3d1b36-07d0-437e-adb4-3c9abb3611ce", 00:25:54.062 "strip_size_kb": 64, 00:25:54.062 "state": "online", 00:25:54.062 "raid_level": "raid0", 00:25:54.062 "superblock": true, 00:25:54.062 "num_base_bdevs": 
4, 00:25:54.062 "num_base_bdevs_discovered": 4, 00:25:54.062 "num_base_bdevs_operational": 4, 00:25:54.062 "base_bdevs_list": [ 00:25:54.062 { 00:25:54.062 "name": "BaseBdev1", 00:25:54.062 "uuid": "0fd17008-57cb-5c68-90bf-79c7befa4e0d", 00:25:54.062 "is_configured": true, 00:25:54.062 "data_offset": 2048, 00:25:54.062 "data_size": 63488 00:25:54.062 }, 00:25:54.062 { 00:25:54.062 "name": "BaseBdev2", 00:25:54.062 "uuid": "4cd234c5-2b51-5acd-b832-5eb0b15d95f9", 00:25:54.062 "is_configured": true, 00:25:54.062 "data_offset": 2048, 00:25:54.062 "data_size": 63488 00:25:54.062 }, 00:25:54.062 { 00:25:54.062 "name": "BaseBdev3", 00:25:54.062 "uuid": "8176a36e-a063-585c-9f4f-ce6f8fb88f13", 00:25:54.062 "is_configured": true, 00:25:54.062 "data_offset": 2048, 00:25:54.062 "data_size": 63488 00:25:54.062 }, 00:25:54.062 { 00:25:54.062 "name": "BaseBdev4", 00:25:54.062 "uuid": "e39cb39b-276a-5ea6-b4b8-2ac414a46a66", 00:25:54.062 "is_configured": true, 00:25:54.062 "data_offset": 2048, 00:25:54.062 "data_size": 63488 00:25:54.062 } 00:25:54.062 ] 00:25:54.062 }' 00:25:54.062 11:49:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.062 11:49:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.629 11:49:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:25:54.630 11:49:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:54.630 [2024-06-10 11:49:26.681710] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:55.564 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.130 11:49:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.388 11:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.388 "name": "raid_bdev1", 00:25:56.388 "uuid": "4e3d1b36-07d0-437e-adb4-3c9abb3611ce", 00:25:56.388 "strip_size_kb": 64, 00:25:56.388 "state": "online", 00:25:56.388 "raid_level": "raid0", 00:25:56.388 "superblock": true, 00:25:56.388 "num_base_bdevs": 4, 00:25:56.388 "num_base_bdevs_discovered": 4, 00:25:56.388 "num_base_bdevs_operational": 4, 00:25:56.388 "base_bdevs_list": [ 00:25:56.388 { 00:25:56.388 "name": "BaseBdev1", 00:25:56.388 "uuid": "0fd17008-57cb-5c68-90bf-79c7befa4e0d", 00:25:56.388 "is_configured": true, 00:25:56.388 "data_offset": 2048, 00:25:56.388 "data_size": 63488 00:25:56.388 }, 00:25:56.388 { 00:25:56.388 "name": "BaseBdev2", 00:25:56.388 "uuid": "4cd234c5-2b51-5acd-b832-5eb0b15d95f9", 00:25:56.388 "is_configured": true, 00:25:56.388 "data_offset": 2048, 00:25:56.388 "data_size": 63488 00:25:56.388 }, 00:25:56.388 { 00:25:56.388 "name": "BaseBdev3", 00:25:56.388 "uuid": "8176a36e-a063-585c-9f4f-ce6f8fb88f13", 00:25:56.388 "is_configured": true, 00:25:56.388 "data_offset": 2048, 00:25:56.388 "data_size": 63488 00:25:56.388 }, 00:25:56.388 { 00:25:56.388 "name": "BaseBdev4", 00:25:56.388 "uuid": "e39cb39b-276a-5ea6-b4b8-2ac414a46a66", 00:25:56.388 "is_configured": true, 00:25:56.388 "data_offset": 2048, 00:25:56.388 "data_size": 63488 00:25:56.388 } 00:25:56.388 ] 00:25:56.388 }' 00:25:56.388 11:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.388 11:49:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.955 11:49:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:56.955 [2024-06-10 11:49:28.991421] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:56.955 [2024-06-10 11:49:28.991469] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:56.955 [2024-06-10 11:49:28.994400] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:56.955 [2024-06-10 11:49:28.994459] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:56.955 [2024-06-10 11:49:28.994505] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:56.955 [2024-06-10 11:49:28.994514] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:25:56.955 0 00:25:56.955 11:49:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 138514 00:25:56.955 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 138514 ']' 00:25:56.955 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 138514 00:25:57.213 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:25:57.213 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:57.213 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 138514 00:25:57.213 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:57.213 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 
-- # '[' reactor_0 = sudo ']' 00:25:57.213 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 138514' 00:25:57.213 killing process with pid 138514 00:25:57.213 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 138514 00:25:57.213 [2024-06-10 11:49:29.034734] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:57.213 11:49:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 138514 00:25:57.470 [2024-06-10 11:49:29.466776] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.NAnJ2auQUz 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:25:59.370 00:25:59.370 real 0m10.076s 00:25:59.370 user 0m15.064s 00:25:59.370 sys 0m1.213s 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:59.370 11:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.370 ************************************ 00:25:59.370 END TEST raid_read_error_test 00:25:59.370 ************************************ 00:25:59.370 11:49:31 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:25:59.370 11:49:31 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:25:59.370 11:49:31 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:59.370 11:49:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:59.370 ************************************ 00:25:59.370 START TEST raid_write_error_test 00:25:59.370 ************************************ 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 4 write 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 
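Both raid_io_error_test variants (the read pass that just finished and the write pass starting here) build each base bdev the same way: a malloc bdev for backing storage, an error bdev layered on top so faults can be injected later, and a passthru bdev that gives the member its final name. A minimal sketch of one such iteration, using the RPC socket from this run (the RPC shell variable is an illustrative shorthand, not part of the test script):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc              # 32 MB backing store, 512-byte blocks
$RPC bdev_error_create BaseBdev1_malloc                         # error-injection wrapper, registered as EE_BaseBdev1_malloc
$RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1   # final member name consumed by bdev_raid_create

The same three calls are repeated for BaseBdev2 through BaseBdev4 before the RAID volume is assembled.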
00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.XGRi3tG2tC 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=138741 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 138741 /var/tmp/spdk-raid.sock 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 138741 ']' 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:59.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:59.370 11:49:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.370 [2024-06-10 11:49:31.398074] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
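The bdevperf instance starting up above was launched with -z, so it waits on the RPC socket instead of running a workload immediately; the test then assembles the RAID volume over the same socket, kicks off the I/O run, and injects the write failure while it is in flight. A rough sketch of that sequence (the &> redirection into the log file is an assumption; flags, paths and the final grep/awk scrape are the ones recorded in this run):

LOG=$(mktemp -p /raidtest)    # bdevperf_log, e.g. /raidtest/tmp.XGRi3tG2tC in this run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &> "$LOG" &
# ...base bdevs and raid_bdev1 are created over /var/tmp/spdk-raid.sock, then:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
# per-second failure rate is later scraped from the bdevperf log:
grep -v Job "$LOG" | grep raid_bdev1 | awk '{print $6}'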
00:25:59.370 [2024-06-10 11:49:31.398388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138741 ] 00:25:59.629 [2024-06-10 11:49:31.604100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.887 [2024-06-10 11:49:31.853134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.146 [2024-06-10 11:49:32.112213] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:00.404 11:49:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:00.404 11:49:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:26:00.404 11:49:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:00.404 11:49:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:00.661 BaseBdev1_malloc 00:26:00.661 11:49:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:01.286 true 00:26:01.286 11:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:01.286 [2024-06-10 11:49:33.229546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:01.286 [2024-06-10 11:49:33.229682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.286 [2024-06-10 11:49:33.229733] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:26:01.286 [2024-06-10 11:49:33.229756] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.286 [2024-06-10 11:49:33.232572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.286 [2024-06-10 11:49:33.232654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:01.286 BaseBdev1 00:26:01.286 11:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:01.286 11:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:01.543 BaseBdev2_malloc 00:26:01.544 11:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:01.801 true 00:26:01.801 11:49:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:02.059 [2024-06-10 11:49:34.018339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:02.059 [2024-06-10 11:49:34.018461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.059 [2024-06-10 11:49:34.018524] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:02.059 [2024-06-10 11:49:34.018549] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.059 [2024-06-10 11:49:34.021417] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.059 [2024-06-10 11:49:34.021695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:02.059 BaseBdev2 00:26:02.059 11:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:02.059 11:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:02.316 BaseBdev3_malloc 00:26:02.316 11:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:02.574 true 00:26:02.574 11:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:02.832 [2024-06-10 11:49:34.788589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:02.832 [2024-06-10 11:49:34.788935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.832 [2024-06-10 11:49:34.789016] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:02.832 [2024-06-10 11:49:34.789250] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.832 [2024-06-10 11:49:34.792030] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.832 [2024-06-10 11:49:34.792255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:02.832 BaseBdev3 00:26:02.832 11:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:02.832 11:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:03.089 BaseBdev4_malloc 00:26:03.089 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:03.346 true 00:26:03.346 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:03.603 [2024-06-10 11:49:35.506218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:03.603 [2024-06-10 11:49:35.506554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.603 [2024-06-10 11:49:35.506730] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:03.603 [2024-06-10 11:49:35.506891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.603 [2024-06-10 11:49:35.509679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.603 [2024-06-10 11:49:35.509913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:03.603 BaseBdev4 00:26:03.603 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:03.860 [2024-06-10 11:49:35.834350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:03.860 [2024-06-10 11:49:35.836874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:03.860 [2024-06-10 11:49:35.837178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:03.860 [2024-06-10 11:49:35.837354] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:03.860 [2024-06-10 11:49:35.837737] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:26:03.860 [2024-06-10 11:49:35.837856] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:03.860 [2024-06-10 11:49:35.838058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:03.860 [2024-06-10 11:49:35.838497] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:26:03.860 [2024-06-10 11:49:35.838637] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:26:03.860 [2024-06-10 11:49:35.839023] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.860 11:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.118 11:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.118 "name": "raid_bdev1", 00:26:04.118 "uuid": "0b6b49c2-99a3-46eb-bef5-187cd790e08f", 00:26:04.118 "strip_size_kb": 64, 00:26:04.118 "state": "online", 00:26:04.118 "raid_level": "raid0", 00:26:04.118 "superblock": true, 00:26:04.118 "num_base_bdevs": 4, 00:26:04.118 "num_base_bdevs_discovered": 4, 00:26:04.118 "num_base_bdevs_operational": 4, 00:26:04.118 "base_bdevs_list": [ 00:26:04.118 { 00:26:04.118 "name": "BaseBdev1", 00:26:04.118 "uuid": "a66b8b06-ff4f-5401-9a01-b0eeba8e23ca", 00:26:04.118 "is_configured": true, 00:26:04.118 "data_offset": 2048, 00:26:04.118 "data_size": 63488 00:26:04.118 }, 00:26:04.118 { 
00:26:04.118 "name": "BaseBdev2", 00:26:04.118 "uuid": "bad79bf9-2a42-5201-9658-7937ee8d2c23", 00:26:04.118 "is_configured": true, 00:26:04.118 "data_offset": 2048, 00:26:04.118 "data_size": 63488 00:26:04.118 }, 00:26:04.118 { 00:26:04.118 "name": "BaseBdev3", 00:26:04.118 "uuid": "b8c08c45-365b-5ec5-a60c-d4c4a54ff4e0", 00:26:04.118 "is_configured": true, 00:26:04.118 "data_offset": 2048, 00:26:04.118 "data_size": 63488 00:26:04.118 }, 00:26:04.118 { 00:26:04.118 "name": "BaseBdev4", 00:26:04.118 "uuid": "547e45fd-b6af-5a1f-b032-8fb0d50e71bf", 00:26:04.118 "is_configured": true, 00:26:04.118 "data_offset": 2048, 00:26:04.118 "data_size": 63488 00:26:04.118 } 00:26:04.118 ] 00:26:04.118 }' 00:26:04.118 11:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.118 11:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.724 11:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:04.724 11:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:04.724 [2024-06-10 11:49:36.736790] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:05.658 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.916 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.917 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:05.917 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.917 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.917 11:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.174 11:49:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.174 "name": "raid_bdev1", 00:26:06.174 "uuid": "0b6b49c2-99a3-46eb-bef5-187cd790e08f", 00:26:06.174 "strip_size_kb": 64, 00:26:06.174 "state": "online", 00:26:06.174 
"raid_level": "raid0", 00:26:06.174 "superblock": true, 00:26:06.174 "num_base_bdevs": 4, 00:26:06.174 "num_base_bdevs_discovered": 4, 00:26:06.174 "num_base_bdevs_operational": 4, 00:26:06.174 "base_bdevs_list": [ 00:26:06.174 { 00:26:06.174 "name": "BaseBdev1", 00:26:06.174 "uuid": "a66b8b06-ff4f-5401-9a01-b0eeba8e23ca", 00:26:06.174 "is_configured": true, 00:26:06.174 "data_offset": 2048, 00:26:06.174 "data_size": 63488 00:26:06.174 }, 00:26:06.174 { 00:26:06.174 "name": "BaseBdev2", 00:26:06.174 "uuid": "bad79bf9-2a42-5201-9658-7937ee8d2c23", 00:26:06.174 "is_configured": true, 00:26:06.174 "data_offset": 2048, 00:26:06.174 "data_size": 63488 00:26:06.174 }, 00:26:06.174 { 00:26:06.174 "name": "BaseBdev3", 00:26:06.174 "uuid": "b8c08c45-365b-5ec5-a60c-d4c4a54ff4e0", 00:26:06.174 "is_configured": true, 00:26:06.175 "data_offset": 2048, 00:26:06.175 "data_size": 63488 00:26:06.175 }, 00:26:06.175 { 00:26:06.175 "name": "BaseBdev4", 00:26:06.175 "uuid": "547e45fd-b6af-5a1f-b032-8fb0d50e71bf", 00:26:06.175 "is_configured": true, 00:26:06.175 "data_offset": 2048, 00:26:06.175 "data_size": 63488 00:26:06.175 } 00:26:06.175 ] 00:26:06.175 }' 00:26:06.175 11:49:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.175 11:49:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.107 11:49:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:07.107 [2024-06-10 11:49:39.148737] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:07.107 [2024-06-10 11:49:39.148985] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:07.107 [2024-06-10 11:49:39.152105] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:07.107 [2024-06-10 11:49:39.152387] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.107 [2024-06-10 11:49:39.152508] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:07.107 [2024-06-10 11:49:39.152692] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:26:07.107 0 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 138741 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 138741 ']' 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 138741 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 138741 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 138741' 00:26:07.370 killing process with pid 138741 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 138741 00:26:07.370 [2024-06-10 11:49:39.192821] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:07.370 11:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 138741 00:26:07.645 [2024-06-10 11:49:39.623334] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.XGRi3tG2tC 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:26:09.542 00:26:09.542 real 0m10.114s 00:26:09.542 user 0m15.150s 00:26:09.542 sys 0m1.221s 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:09.542 11:49:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.542 ************************************ 00:26:09.542 END TEST raid_write_error_test 00:26:09.542 ************************************ 00:26:09.542 11:49:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:26:09.542 11:49:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:26:09.542 11:49:41 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:26:09.542 11:49:41 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:09.542 11:49:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:09.542 ************************************ 00:26:09.542 START TEST raid_state_function_test 00:26:09.542 ************************************ 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 4 false 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=138968 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 138968' 00:26:09.542 Process raid pid: 138968 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 138968 /var/tmp/spdk-raid.sock 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 138968 ']' 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:09.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:09.542 11:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.542 [2024-06-10 11:49:41.536728] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
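raid_state_function_test exercises the configuring path: it requests a concat volume over four base bdevs that do not exist yet (no -s flag, matching superblock=false for this test), so every member is reported as missing and Existed_Raid is registered in the "configuring" state rather than coming online. A condensed sketch of the check performed in the log lines that follow (the trailing .state filter is added here for illustration; the script itself selects the whole object):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expected: configuring

Once BaseBdev1 is later backed by a real malloc bdev, num_base_bdevs_discovered rises to 1 while the state stays "configuring" until all four members exist.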
00:26:09.542 [2024-06-10 11:49:41.537434] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.800 [2024-06-10 11:49:41.702532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.058 [2024-06-10 11:49:41.956102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.316 [2024-06-10 11:49:42.184450] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.574 11:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:10.574 11:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:26:10.574 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:10.832 [2024-06-10 11:49:42.646104] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:10.832 [2024-06-10 11:49:42.646207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:10.832 [2024-06-10 11:49:42.646220] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:10.832 [2024-06-10 11:49:42.646247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:10.832 [2024-06-10 11:49:42.646256] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:10.832 [2024-06-10 11:49:42.646274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:10.832 [2024-06-10 11:49:42.646282] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:10.832 [2024-06-10 11:49:42.646306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.832 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:26:11.090 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.090 "name": "Existed_Raid", 00:26:11.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.090 "strip_size_kb": 64, 00:26:11.090 "state": "configuring", 00:26:11.090 "raid_level": "concat", 00:26:11.090 "superblock": false, 00:26:11.090 "num_base_bdevs": 4, 00:26:11.090 "num_base_bdevs_discovered": 0, 00:26:11.090 "num_base_bdevs_operational": 4, 00:26:11.090 "base_bdevs_list": [ 00:26:11.090 { 00:26:11.090 "name": "BaseBdev1", 00:26:11.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.090 "is_configured": false, 00:26:11.090 "data_offset": 0, 00:26:11.090 "data_size": 0 00:26:11.090 }, 00:26:11.090 { 00:26:11.090 "name": "BaseBdev2", 00:26:11.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.090 "is_configured": false, 00:26:11.090 "data_offset": 0, 00:26:11.090 "data_size": 0 00:26:11.090 }, 00:26:11.090 { 00:26:11.090 "name": "BaseBdev3", 00:26:11.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.090 "is_configured": false, 00:26:11.090 "data_offset": 0, 00:26:11.090 "data_size": 0 00:26:11.090 }, 00:26:11.090 { 00:26:11.090 "name": "BaseBdev4", 00:26:11.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.090 "is_configured": false, 00:26:11.090 "data_offset": 0, 00:26:11.090 "data_size": 0 00:26:11.090 } 00:26:11.090 ] 00:26:11.090 }' 00:26:11.090 11:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.090 11:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.657 11:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:11.915 [2024-06-10 11:49:43.718240] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:11.915 [2024-06-10 11:49:43.718290] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:26:11.915 11:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:11.915 [2024-06-10 11:49:43.930295] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.915 [2024-06-10 11:49:43.930381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:11.915 [2024-06-10 11:49:43.930393] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.915 [2024-06-10 11:49:43.930447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.915 [2024-06-10 11:49:43.930457] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:11.915 [2024-06-10 11:49:43.930495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:11.915 [2024-06-10 11:49:43.930504] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:11.915 [2024-06-10 11:49:43.930529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:11.915 11:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:12.174 [2024-06-10 11:49:44.191757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:12.174 BaseBdev1 00:26:12.174 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:12.174 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:26:12.174 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:12.174 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:26:12.174 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:12.174 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:12.174 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:12.740 [ 00:26:12.740 { 00:26:12.740 "name": "BaseBdev1", 00:26:12.740 "aliases": [ 00:26:12.740 "bce85d39-d1fa-4d5b-ae23-4d5edc10d417" 00:26:12.740 ], 00:26:12.740 "product_name": "Malloc disk", 00:26:12.740 "block_size": 512, 00:26:12.740 "num_blocks": 65536, 00:26:12.740 "uuid": "bce85d39-d1fa-4d5b-ae23-4d5edc10d417", 00:26:12.740 "assigned_rate_limits": { 00:26:12.740 "rw_ios_per_sec": 0, 00:26:12.740 "rw_mbytes_per_sec": 0, 00:26:12.740 "r_mbytes_per_sec": 0, 00:26:12.740 "w_mbytes_per_sec": 0 00:26:12.740 }, 00:26:12.740 "claimed": true, 00:26:12.740 "claim_type": "exclusive_write", 00:26:12.740 "zoned": false, 00:26:12.740 "supported_io_types": { 00:26:12.740 "read": true, 00:26:12.740 "write": true, 00:26:12.740 "unmap": true, 00:26:12.740 "write_zeroes": true, 00:26:12.740 "flush": true, 00:26:12.740 "reset": true, 00:26:12.740 "compare": false, 00:26:12.740 "compare_and_write": false, 00:26:12.740 "abort": true, 00:26:12.740 "nvme_admin": false, 00:26:12.740 "nvme_io": false 00:26:12.740 }, 00:26:12.740 "memory_domains": [ 00:26:12.740 { 00:26:12.740 "dma_device_id": "system", 00:26:12.740 "dma_device_type": 1 00:26:12.740 }, 00:26:12.740 { 00:26:12.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.740 "dma_device_type": 2 00:26:12.740 } 00:26:12.740 ], 00:26:12.740 "driver_specific": {} 00:26:12.740 } 00:26:12.740 ] 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.740 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.998 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.999 "name": "Existed_Raid", 00:26:12.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.999 "strip_size_kb": 64, 00:26:12.999 "state": "configuring", 00:26:12.999 "raid_level": "concat", 00:26:12.999 "superblock": false, 00:26:12.999 "num_base_bdevs": 4, 00:26:12.999 "num_base_bdevs_discovered": 1, 00:26:12.999 "num_base_bdevs_operational": 4, 00:26:12.999 "base_bdevs_list": [ 00:26:12.999 { 00:26:12.999 "name": "BaseBdev1", 00:26:12.999 "uuid": "bce85d39-d1fa-4d5b-ae23-4d5edc10d417", 00:26:12.999 "is_configured": true, 00:26:12.999 "data_offset": 0, 00:26:12.999 "data_size": 65536 00:26:12.999 }, 00:26:12.999 { 00:26:12.999 "name": "BaseBdev2", 00:26:12.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.999 "is_configured": false, 00:26:12.999 "data_offset": 0, 00:26:12.999 "data_size": 0 00:26:12.999 }, 00:26:12.999 { 00:26:12.999 "name": "BaseBdev3", 00:26:12.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.999 "is_configured": false, 00:26:12.999 "data_offset": 0, 00:26:12.999 "data_size": 0 00:26:12.999 }, 00:26:12.999 { 00:26:12.999 "name": "BaseBdev4", 00:26:12.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.999 "is_configured": false, 00:26:12.999 "data_offset": 0, 00:26:12.999 "data_size": 0 00:26:12.999 } 00:26:12.999 ] 00:26:12.999 }' 00:26:12.999 11:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.999 11:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.564 11:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:13.822 [2024-06-10 11:49:45.812184] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:13.822 [2024-06-10 11:49:45.812258] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:26:13.822 11:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:14.080 [2024-06-10 11:49:46.064265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:14.080 [2024-06-10 11:49:46.066490] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:14.080 [2024-06-10 11:49:46.066565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:14.080 [2024-06-10 11:49:46.066575] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:14.080 [2024-06-10 
11:49:46.066603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:14.080 [2024-06-10 11:49:46.066612] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:14.080 [2024-06-10 11:49:46.066635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.080 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.338 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.338 "name": "Existed_Raid", 00:26:14.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.338 "strip_size_kb": 64, 00:26:14.338 "state": "configuring", 00:26:14.338 "raid_level": "concat", 00:26:14.338 "superblock": false, 00:26:14.338 "num_base_bdevs": 4, 00:26:14.338 "num_base_bdevs_discovered": 1, 00:26:14.338 "num_base_bdevs_operational": 4, 00:26:14.338 "base_bdevs_list": [ 00:26:14.338 { 00:26:14.338 "name": "BaseBdev1", 00:26:14.338 "uuid": "bce85d39-d1fa-4d5b-ae23-4d5edc10d417", 00:26:14.338 "is_configured": true, 00:26:14.338 "data_offset": 0, 00:26:14.338 "data_size": 65536 00:26:14.338 }, 00:26:14.338 { 00:26:14.338 "name": "BaseBdev2", 00:26:14.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.338 "is_configured": false, 00:26:14.338 "data_offset": 0, 00:26:14.338 "data_size": 0 00:26:14.338 }, 00:26:14.338 { 00:26:14.338 "name": "BaseBdev3", 00:26:14.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.338 "is_configured": false, 00:26:14.338 "data_offset": 0, 00:26:14.338 "data_size": 0 00:26:14.338 }, 00:26:14.338 { 00:26:14.338 "name": "BaseBdev4", 00:26:14.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.338 "is_configured": false, 00:26:14.338 "data_offset": 0, 00:26:14.338 "data_size": 0 00:26:14.338 } 00:26:14.338 ] 00:26:14.338 }' 00:26:14.338 11:49:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.338 11:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.904 11:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:15.166 [2024-06-10 11:49:47.198896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:15.166 BaseBdev2 00:26:15.166 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:15.166 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:26:15.166 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:15.166 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:26:15.166 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:15.166 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:15.166 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:15.424 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:15.683 [ 00:26:15.683 { 00:26:15.683 "name": "BaseBdev2", 00:26:15.683 "aliases": [ 00:26:15.683 "8d2d8c43-93be-417d-9c14-ca8dca0cebbf" 00:26:15.683 ], 00:26:15.683 "product_name": "Malloc disk", 00:26:15.683 "block_size": 512, 00:26:15.683 "num_blocks": 65536, 00:26:15.683 "uuid": "8d2d8c43-93be-417d-9c14-ca8dca0cebbf", 00:26:15.683 "assigned_rate_limits": { 00:26:15.683 "rw_ios_per_sec": 0, 00:26:15.683 "rw_mbytes_per_sec": 0, 00:26:15.683 "r_mbytes_per_sec": 0, 00:26:15.683 "w_mbytes_per_sec": 0 00:26:15.683 }, 00:26:15.683 "claimed": true, 00:26:15.683 "claim_type": "exclusive_write", 00:26:15.683 "zoned": false, 00:26:15.683 "supported_io_types": { 00:26:15.683 "read": true, 00:26:15.683 "write": true, 00:26:15.683 "unmap": true, 00:26:15.683 "write_zeroes": true, 00:26:15.683 "flush": true, 00:26:15.683 "reset": true, 00:26:15.683 "compare": false, 00:26:15.683 "compare_and_write": false, 00:26:15.683 "abort": true, 00:26:15.683 "nvme_admin": false, 00:26:15.683 "nvme_io": false 00:26:15.683 }, 00:26:15.683 "memory_domains": [ 00:26:15.683 { 00:26:15.683 "dma_device_id": "system", 00:26:15.683 "dma_device_type": 1 00:26:15.683 }, 00:26:15.683 { 00:26:15.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.683 "dma_device_type": 2 00:26:15.683 } 00:26:15.683 ], 00:26:15.683 "driver_specific": {} 00:26:15.683 } 00:26:15.683 ] 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.683 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.941 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.941 "name": "Existed_Raid", 00:26:15.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.941 "strip_size_kb": 64, 00:26:15.941 "state": "configuring", 00:26:15.941 "raid_level": "concat", 00:26:15.941 "superblock": false, 00:26:15.941 "num_base_bdevs": 4, 00:26:15.941 "num_base_bdevs_discovered": 2, 00:26:15.941 "num_base_bdevs_operational": 4, 00:26:15.941 "base_bdevs_list": [ 00:26:15.941 { 00:26:15.941 "name": "BaseBdev1", 00:26:15.941 "uuid": "bce85d39-d1fa-4d5b-ae23-4d5edc10d417", 00:26:15.941 "is_configured": true, 00:26:15.941 "data_offset": 0, 00:26:15.941 "data_size": 65536 00:26:15.941 }, 00:26:15.941 { 00:26:15.941 "name": "BaseBdev2", 00:26:15.941 "uuid": "8d2d8c43-93be-417d-9c14-ca8dca0cebbf", 00:26:15.941 "is_configured": true, 00:26:15.941 "data_offset": 0, 00:26:15.941 "data_size": 65536 00:26:15.941 }, 00:26:15.941 { 00:26:15.941 "name": "BaseBdev3", 00:26:15.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.941 "is_configured": false, 00:26:15.941 "data_offset": 0, 00:26:15.941 "data_size": 0 00:26:15.941 }, 00:26:15.941 { 00:26:15.941 "name": "BaseBdev4", 00:26:15.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.941 "is_configured": false, 00:26:15.941 "data_offset": 0, 00:26:15.941 "data_size": 0 00:26:15.941 } 00:26:15.941 ] 00:26:15.941 }' 00:26:15.941 11:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.941 11:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.929 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:16.929 [2024-06-10 11:49:48.883445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:16.929 BaseBdev3 00:26:16.929 11:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:16.929 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:26:16.929 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:16.929 11:49:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local i 00:26:16.929 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:16.929 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:16.929 11:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:17.188 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:17.446 [ 00:26:17.446 { 00:26:17.446 "name": "BaseBdev3", 00:26:17.446 "aliases": [ 00:26:17.446 "388a16c3-a057-4c91-8817-a8355ed329b1" 00:26:17.446 ], 00:26:17.446 "product_name": "Malloc disk", 00:26:17.446 "block_size": 512, 00:26:17.446 "num_blocks": 65536, 00:26:17.446 "uuid": "388a16c3-a057-4c91-8817-a8355ed329b1", 00:26:17.446 "assigned_rate_limits": { 00:26:17.446 "rw_ios_per_sec": 0, 00:26:17.446 "rw_mbytes_per_sec": 0, 00:26:17.446 "r_mbytes_per_sec": 0, 00:26:17.446 "w_mbytes_per_sec": 0 00:26:17.446 }, 00:26:17.446 "claimed": true, 00:26:17.446 "claim_type": "exclusive_write", 00:26:17.446 "zoned": false, 00:26:17.446 "supported_io_types": { 00:26:17.446 "read": true, 00:26:17.446 "write": true, 00:26:17.446 "unmap": true, 00:26:17.446 "write_zeroes": true, 00:26:17.446 "flush": true, 00:26:17.446 "reset": true, 00:26:17.446 "compare": false, 00:26:17.446 "compare_and_write": false, 00:26:17.446 "abort": true, 00:26:17.446 "nvme_admin": false, 00:26:17.446 "nvme_io": false 00:26:17.446 }, 00:26:17.446 "memory_domains": [ 00:26:17.446 { 00:26:17.446 "dma_device_id": "system", 00:26:17.446 "dma_device_type": 1 00:26:17.446 }, 00:26:17.446 { 00:26:17.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.446 "dma_device_type": 2 00:26:17.446 } 00:26:17.446 ], 00:26:17.446 "driver_specific": {} 00:26:17.446 } 00:26:17.446 ] 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:17.446 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:17.447 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.447 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.447 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.447 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.447 11:49:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.447 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.705 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.705 "name": "Existed_Raid", 00:26:17.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.705 "strip_size_kb": 64, 00:26:17.705 "state": "configuring", 00:26:17.705 "raid_level": "concat", 00:26:17.705 "superblock": false, 00:26:17.705 "num_base_bdevs": 4, 00:26:17.705 "num_base_bdevs_discovered": 3, 00:26:17.705 "num_base_bdevs_operational": 4, 00:26:17.705 "base_bdevs_list": [ 00:26:17.705 { 00:26:17.705 "name": "BaseBdev1", 00:26:17.705 "uuid": "bce85d39-d1fa-4d5b-ae23-4d5edc10d417", 00:26:17.705 "is_configured": true, 00:26:17.705 "data_offset": 0, 00:26:17.705 "data_size": 65536 00:26:17.705 }, 00:26:17.705 { 00:26:17.705 "name": "BaseBdev2", 00:26:17.705 "uuid": "8d2d8c43-93be-417d-9c14-ca8dca0cebbf", 00:26:17.705 "is_configured": true, 00:26:17.705 "data_offset": 0, 00:26:17.705 "data_size": 65536 00:26:17.705 }, 00:26:17.705 { 00:26:17.705 "name": "BaseBdev3", 00:26:17.705 "uuid": "388a16c3-a057-4c91-8817-a8355ed329b1", 00:26:17.705 "is_configured": true, 00:26:17.705 "data_offset": 0, 00:26:17.705 "data_size": 65536 00:26:17.705 }, 00:26:17.705 { 00:26:17.705 "name": "BaseBdev4", 00:26:17.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.705 "is_configured": false, 00:26:17.705 "data_offset": 0, 00:26:17.705 "data_size": 0 00:26:17.705 } 00:26:17.705 ] 00:26:17.705 }' 00:26:17.705 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.705 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.271 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:18.529 [2024-06-10 11:49:50.489534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:18.529 [2024-06-10 11:49:50.489601] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:26:18.529 [2024-06-10 11:49:50.489620] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:26:18.529 [2024-06-10 11:49:50.489756] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:18.529 [2024-06-10 11:49:50.490106] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:26:18.529 [2024-06-10 11:49:50.490118] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:26:18.529 [2024-06-10 11:49:50.490368] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:18.529 BaseBdev4 00:26:18.529 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:18.529 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:26:18.529 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:18.529 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:26:18.529 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- 
# [[ -z '' ]] 00:26:18.529 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:18.529 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:18.787 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:19.045 [ 00:26:19.045 { 00:26:19.045 "name": "BaseBdev4", 00:26:19.045 "aliases": [ 00:26:19.045 "49f7b338-569c-4bd3-82e7-c62d38dd6f5a" 00:26:19.045 ], 00:26:19.045 "product_name": "Malloc disk", 00:26:19.045 "block_size": 512, 00:26:19.045 "num_blocks": 65536, 00:26:19.045 "uuid": "49f7b338-569c-4bd3-82e7-c62d38dd6f5a", 00:26:19.045 "assigned_rate_limits": { 00:26:19.045 "rw_ios_per_sec": 0, 00:26:19.046 "rw_mbytes_per_sec": 0, 00:26:19.046 "r_mbytes_per_sec": 0, 00:26:19.046 "w_mbytes_per_sec": 0 00:26:19.046 }, 00:26:19.046 "claimed": true, 00:26:19.046 "claim_type": "exclusive_write", 00:26:19.046 "zoned": false, 00:26:19.046 "supported_io_types": { 00:26:19.046 "read": true, 00:26:19.046 "write": true, 00:26:19.046 "unmap": true, 00:26:19.046 "write_zeroes": true, 00:26:19.046 "flush": true, 00:26:19.046 "reset": true, 00:26:19.046 "compare": false, 00:26:19.046 "compare_and_write": false, 00:26:19.046 "abort": true, 00:26:19.046 "nvme_admin": false, 00:26:19.046 "nvme_io": false 00:26:19.046 }, 00:26:19.046 "memory_domains": [ 00:26:19.046 { 00:26:19.046 "dma_device_id": "system", 00:26:19.046 "dma_device_type": 1 00:26:19.046 }, 00:26:19.046 { 00:26:19.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.046 "dma_device_type": 2 00:26:19.046 } 00:26:19.046 ], 00:26:19.046 "driver_specific": {} 00:26:19.046 } 00:26:19.046 ] 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.046 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.046 11:49:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.304 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:19.304 "name": "Existed_Raid", 00:26:19.304 "uuid": "8ffcde63-5af8-4288-89d8-451a1cf7c282", 00:26:19.304 "strip_size_kb": 64, 00:26:19.304 "state": "online", 00:26:19.304 "raid_level": "concat", 00:26:19.304 "superblock": false, 00:26:19.304 "num_base_bdevs": 4, 00:26:19.304 "num_base_bdevs_discovered": 4, 00:26:19.304 "num_base_bdevs_operational": 4, 00:26:19.304 "base_bdevs_list": [ 00:26:19.304 { 00:26:19.304 "name": "BaseBdev1", 00:26:19.304 "uuid": "bce85d39-d1fa-4d5b-ae23-4d5edc10d417", 00:26:19.304 "is_configured": true, 00:26:19.304 "data_offset": 0, 00:26:19.304 "data_size": 65536 00:26:19.304 }, 00:26:19.304 { 00:26:19.304 "name": "BaseBdev2", 00:26:19.304 "uuid": "8d2d8c43-93be-417d-9c14-ca8dca0cebbf", 00:26:19.304 "is_configured": true, 00:26:19.304 "data_offset": 0, 00:26:19.304 "data_size": 65536 00:26:19.304 }, 00:26:19.304 { 00:26:19.304 "name": "BaseBdev3", 00:26:19.304 "uuid": "388a16c3-a057-4c91-8817-a8355ed329b1", 00:26:19.304 "is_configured": true, 00:26:19.304 "data_offset": 0, 00:26:19.304 "data_size": 65536 00:26:19.304 }, 00:26:19.304 { 00:26:19.304 "name": "BaseBdev4", 00:26:19.304 "uuid": "49f7b338-569c-4bd3-82e7-c62d38dd6f5a", 00:26:19.304 "is_configured": true, 00:26:19.304 "data_offset": 0, 00:26:19.304 "data_size": 65536 00:26:19.304 } 00:26:19.304 ] 00:26:19.304 }' 00:26:19.304 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:19.304 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.882 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:19.882 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:19.882 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:19.882 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:19.882 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:19.882 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:19.882 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:19.882 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:20.140 [2024-06-10 11:49:52.042212] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:20.140 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:20.140 "name": "Existed_Raid", 00:26:20.140 "aliases": [ 00:26:20.140 "8ffcde63-5af8-4288-89d8-451a1cf7c282" 00:26:20.140 ], 00:26:20.140 "product_name": "Raid Volume", 00:26:20.140 "block_size": 512, 00:26:20.140 "num_blocks": 262144, 00:26:20.140 "uuid": "8ffcde63-5af8-4288-89d8-451a1cf7c282", 00:26:20.140 "assigned_rate_limits": { 00:26:20.140 "rw_ios_per_sec": 0, 00:26:20.140 "rw_mbytes_per_sec": 0, 00:26:20.140 "r_mbytes_per_sec": 0, 00:26:20.140 "w_mbytes_per_sec": 0 00:26:20.140 }, 00:26:20.140 "claimed": false, 00:26:20.140 "zoned": false, 00:26:20.140 "supported_io_types": { 00:26:20.140 "read": true, 
00:26:20.140 "write": true, 00:26:20.140 "unmap": true, 00:26:20.140 "write_zeroes": true, 00:26:20.140 "flush": true, 00:26:20.140 "reset": true, 00:26:20.140 "compare": false, 00:26:20.140 "compare_and_write": false, 00:26:20.140 "abort": false, 00:26:20.140 "nvme_admin": false, 00:26:20.140 "nvme_io": false 00:26:20.140 }, 00:26:20.140 "memory_domains": [ 00:26:20.140 { 00:26:20.140 "dma_device_id": "system", 00:26:20.140 "dma_device_type": 1 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.140 "dma_device_type": 2 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "dma_device_id": "system", 00:26:20.140 "dma_device_type": 1 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.140 "dma_device_type": 2 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "dma_device_id": "system", 00:26:20.140 "dma_device_type": 1 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.140 "dma_device_type": 2 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "dma_device_id": "system", 00:26:20.140 "dma_device_type": 1 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.140 "dma_device_type": 2 00:26:20.140 } 00:26:20.140 ], 00:26:20.140 "driver_specific": { 00:26:20.140 "raid": { 00:26:20.140 "uuid": "8ffcde63-5af8-4288-89d8-451a1cf7c282", 00:26:20.140 "strip_size_kb": 64, 00:26:20.140 "state": "online", 00:26:20.140 "raid_level": "concat", 00:26:20.140 "superblock": false, 00:26:20.140 "num_base_bdevs": 4, 00:26:20.140 "num_base_bdevs_discovered": 4, 00:26:20.140 "num_base_bdevs_operational": 4, 00:26:20.140 "base_bdevs_list": [ 00:26:20.140 { 00:26:20.140 "name": "BaseBdev1", 00:26:20.140 "uuid": "bce85d39-d1fa-4d5b-ae23-4d5edc10d417", 00:26:20.140 "is_configured": true, 00:26:20.140 "data_offset": 0, 00:26:20.140 "data_size": 65536 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "name": "BaseBdev2", 00:26:20.140 "uuid": "8d2d8c43-93be-417d-9c14-ca8dca0cebbf", 00:26:20.140 "is_configured": true, 00:26:20.140 "data_offset": 0, 00:26:20.140 "data_size": 65536 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "name": "BaseBdev3", 00:26:20.140 "uuid": "388a16c3-a057-4c91-8817-a8355ed329b1", 00:26:20.140 "is_configured": true, 00:26:20.140 "data_offset": 0, 00:26:20.140 "data_size": 65536 00:26:20.140 }, 00:26:20.140 { 00:26:20.140 "name": "BaseBdev4", 00:26:20.140 "uuid": "49f7b338-569c-4bd3-82e7-c62d38dd6f5a", 00:26:20.140 "is_configured": true, 00:26:20.140 "data_offset": 0, 00:26:20.140 "data_size": 65536 00:26:20.140 } 00:26:20.140 ] 00:26:20.140 } 00:26:20.140 } 00:26:20.140 }' 00:26:20.140 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:20.140 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:20.140 BaseBdev2 00:26:20.140 BaseBdev3 00:26:20.140 BaseBdev4' 00:26:20.140 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:20.140 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:20.140 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:20.398 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:20.398 "name": "BaseBdev1", 
00:26:20.399 "aliases": [ 00:26:20.399 "bce85d39-d1fa-4d5b-ae23-4d5edc10d417" 00:26:20.399 ], 00:26:20.399 "product_name": "Malloc disk", 00:26:20.399 "block_size": 512, 00:26:20.399 "num_blocks": 65536, 00:26:20.399 "uuid": "bce85d39-d1fa-4d5b-ae23-4d5edc10d417", 00:26:20.399 "assigned_rate_limits": { 00:26:20.399 "rw_ios_per_sec": 0, 00:26:20.399 "rw_mbytes_per_sec": 0, 00:26:20.399 "r_mbytes_per_sec": 0, 00:26:20.399 "w_mbytes_per_sec": 0 00:26:20.399 }, 00:26:20.399 "claimed": true, 00:26:20.399 "claim_type": "exclusive_write", 00:26:20.399 "zoned": false, 00:26:20.399 "supported_io_types": { 00:26:20.399 "read": true, 00:26:20.399 "write": true, 00:26:20.399 "unmap": true, 00:26:20.399 "write_zeroes": true, 00:26:20.399 "flush": true, 00:26:20.399 "reset": true, 00:26:20.399 "compare": false, 00:26:20.399 "compare_and_write": false, 00:26:20.399 "abort": true, 00:26:20.399 "nvme_admin": false, 00:26:20.399 "nvme_io": false 00:26:20.399 }, 00:26:20.399 "memory_domains": [ 00:26:20.399 { 00:26:20.399 "dma_device_id": "system", 00:26:20.399 "dma_device_type": 1 00:26:20.399 }, 00:26:20.399 { 00:26:20.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.399 "dma_device_type": 2 00:26:20.399 } 00:26:20.399 ], 00:26:20.399 "driver_specific": {} 00:26:20.399 }' 00:26:20.399 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:20.399 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:20.399 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:20.399 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:20.656 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:20.914 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:20.914 "name": "BaseBdev2", 00:26:20.914 "aliases": [ 00:26:20.914 "8d2d8c43-93be-417d-9c14-ca8dca0cebbf" 00:26:20.914 ], 00:26:20.914 "product_name": "Malloc disk", 00:26:20.914 "block_size": 512, 00:26:20.914 "num_blocks": 65536, 00:26:20.914 "uuid": "8d2d8c43-93be-417d-9c14-ca8dca0cebbf", 00:26:20.914 "assigned_rate_limits": { 00:26:20.914 "rw_ios_per_sec": 0, 00:26:20.914 "rw_mbytes_per_sec": 0, 00:26:20.914 "r_mbytes_per_sec": 0, 00:26:20.914 "w_mbytes_per_sec": 0 00:26:20.914 }, 00:26:20.914 "claimed": true, 
00:26:20.914 "claim_type": "exclusive_write", 00:26:20.914 "zoned": false, 00:26:20.914 "supported_io_types": { 00:26:20.914 "read": true, 00:26:20.914 "write": true, 00:26:20.914 "unmap": true, 00:26:20.914 "write_zeroes": true, 00:26:20.914 "flush": true, 00:26:20.914 "reset": true, 00:26:20.914 "compare": false, 00:26:20.914 "compare_and_write": false, 00:26:20.914 "abort": true, 00:26:20.914 "nvme_admin": false, 00:26:20.914 "nvme_io": false 00:26:20.914 }, 00:26:20.914 "memory_domains": [ 00:26:20.914 { 00:26:20.914 "dma_device_id": "system", 00:26:20.914 "dma_device_type": 1 00:26:20.914 }, 00:26:20.914 { 00:26:20.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.914 "dma_device_type": 2 00:26:20.914 } 00:26:20.914 ], 00:26:20.914 "driver_specific": {} 00:26:20.914 }' 00:26:20.914 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:21.186 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:21.186 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:21.186 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:21.186 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:21.186 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:21.186 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.186 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.187 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:21.447 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.447 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.447 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:21.447 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:21.447 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:21.447 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:21.705 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:21.705 "name": "BaseBdev3", 00:26:21.705 "aliases": [ 00:26:21.705 "388a16c3-a057-4c91-8817-a8355ed329b1" 00:26:21.705 ], 00:26:21.705 "product_name": "Malloc disk", 00:26:21.705 "block_size": 512, 00:26:21.705 "num_blocks": 65536, 00:26:21.705 "uuid": "388a16c3-a057-4c91-8817-a8355ed329b1", 00:26:21.705 "assigned_rate_limits": { 00:26:21.705 "rw_ios_per_sec": 0, 00:26:21.705 "rw_mbytes_per_sec": 0, 00:26:21.705 "r_mbytes_per_sec": 0, 00:26:21.705 "w_mbytes_per_sec": 0 00:26:21.705 }, 00:26:21.705 "claimed": true, 00:26:21.705 "claim_type": "exclusive_write", 00:26:21.705 "zoned": false, 00:26:21.705 "supported_io_types": { 00:26:21.705 "read": true, 00:26:21.705 "write": true, 00:26:21.705 "unmap": true, 00:26:21.705 "write_zeroes": true, 00:26:21.705 "flush": true, 00:26:21.705 "reset": true, 00:26:21.705 "compare": false, 00:26:21.705 "compare_and_write": false, 00:26:21.705 "abort": true, 00:26:21.705 "nvme_admin": false, 00:26:21.705 "nvme_io": false 00:26:21.705 }, 00:26:21.705 "memory_domains": [ 
00:26:21.705 { 00:26:21.705 "dma_device_id": "system", 00:26:21.705 "dma_device_type": 1 00:26:21.705 }, 00:26:21.705 { 00:26:21.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.705 "dma_device_type": 2 00:26:21.705 } 00:26:21.705 ], 00:26:21.705 "driver_specific": {} 00:26:21.705 }' 00:26:21.705 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:21.705 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:21.705 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:21.705 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:21.705 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:21.963 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:22.222 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:22.222 "name": "BaseBdev4", 00:26:22.222 "aliases": [ 00:26:22.222 "49f7b338-569c-4bd3-82e7-c62d38dd6f5a" 00:26:22.222 ], 00:26:22.222 "product_name": "Malloc disk", 00:26:22.222 "block_size": 512, 00:26:22.222 "num_blocks": 65536, 00:26:22.222 "uuid": "49f7b338-569c-4bd3-82e7-c62d38dd6f5a", 00:26:22.222 "assigned_rate_limits": { 00:26:22.222 "rw_ios_per_sec": 0, 00:26:22.222 "rw_mbytes_per_sec": 0, 00:26:22.222 "r_mbytes_per_sec": 0, 00:26:22.222 "w_mbytes_per_sec": 0 00:26:22.222 }, 00:26:22.222 "claimed": true, 00:26:22.222 "claim_type": "exclusive_write", 00:26:22.222 "zoned": false, 00:26:22.222 "supported_io_types": { 00:26:22.222 "read": true, 00:26:22.222 "write": true, 00:26:22.222 "unmap": true, 00:26:22.222 "write_zeroes": true, 00:26:22.222 "flush": true, 00:26:22.222 "reset": true, 00:26:22.222 "compare": false, 00:26:22.222 "compare_and_write": false, 00:26:22.222 "abort": true, 00:26:22.222 "nvme_admin": false, 00:26:22.222 "nvme_io": false 00:26:22.222 }, 00:26:22.222 "memory_domains": [ 00:26:22.222 { 00:26:22.222 "dma_device_id": "system", 00:26:22.222 "dma_device_type": 1 00:26:22.222 }, 00:26:22.222 { 00:26:22.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.222 "dma_device_type": 2 00:26:22.222 } 00:26:22.222 ], 00:26:22.222 "driver_specific": {} 00:26:22.222 }' 00:26:22.222 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:22.222 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:26:22.222 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:22.222 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:22.480 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:22.480 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:22.480 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:22.480 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:22.480 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:22.480 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:22.738 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:22.739 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:22.739 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:23.032 [2024-06-10 11:49:54.850726] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:23.032 [2024-06-10 11:49:54.850766] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:23.032 [2024-06-10 11:49:54.850825] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.032 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:26:23.289 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:23.289 "name": "Existed_Raid", 00:26:23.289 "uuid": "8ffcde63-5af8-4288-89d8-451a1cf7c282", 00:26:23.289 "strip_size_kb": 64, 00:26:23.289 "state": "offline", 00:26:23.289 "raid_level": "concat", 00:26:23.289 "superblock": false, 00:26:23.289 "num_base_bdevs": 4, 00:26:23.289 "num_base_bdevs_discovered": 3, 00:26:23.289 "num_base_bdevs_operational": 3, 00:26:23.289 "base_bdevs_list": [ 00:26:23.289 { 00:26:23.289 "name": null, 00:26:23.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.289 "is_configured": false, 00:26:23.289 "data_offset": 0, 00:26:23.289 "data_size": 65536 00:26:23.289 }, 00:26:23.289 { 00:26:23.289 "name": "BaseBdev2", 00:26:23.289 "uuid": "8d2d8c43-93be-417d-9c14-ca8dca0cebbf", 00:26:23.289 "is_configured": true, 00:26:23.289 "data_offset": 0, 00:26:23.289 "data_size": 65536 00:26:23.289 }, 00:26:23.289 { 00:26:23.289 "name": "BaseBdev3", 00:26:23.289 "uuid": "388a16c3-a057-4c91-8817-a8355ed329b1", 00:26:23.289 "is_configured": true, 00:26:23.289 "data_offset": 0, 00:26:23.289 "data_size": 65536 00:26:23.289 }, 00:26:23.289 { 00:26:23.289 "name": "BaseBdev4", 00:26:23.289 "uuid": "49f7b338-569c-4bd3-82e7-c62d38dd6f5a", 00:26:23.289 "is_configured": true, 00:26:23.289 "data_offset": 0, 00:26:23.289 "data_size": 65536 00:26:23.289 } 00:26:23.289 ] 00:26:23.289 }' 00:26:23.289 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:23.289 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.854 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:23.854 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:23.854 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.854 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:24.112 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:24.112 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:24.112 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:24.369 [2024-06-10 11:49:56.305740] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:24.627 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:24.627 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:24.627 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:24.627 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.903 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:24.903 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:24.903 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:24.903 [2024-06-10 11:49:56.929680] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:25.160 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:25.161 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:25.161 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:25.161 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.419 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:25.419 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:25.419 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:25.677 [2024-06-10 11:49:57.495494] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:25.677 [2024-06-10 11:49:57.495570] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:26:25.677 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:25.677 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:25.677 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:25.677 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.935 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:25.935 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:25.935 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:25.935 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:25.935 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:25.935 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:26.193 BaseBdev2 00:26:26.193 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:26.193 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:26:26.193 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:26.193 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:26:26.194 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:26.194 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:26.194 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:26.477 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:26.738 [ 00:26:26.738 { 00:26:26.738 "name": "BaseBdev2", 00:26:26.738 "aliases": [ 00:26:26.738 "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea" 00:26:26.738 ], 00:26:26.738 "product_name": "Malloc disk", 00:26:26.738 "block_size": 512, 00:26:26.738 "num_blocks": 65536, 00:26:26.738 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:26.738 "assigned_rate_limits": { 00:26:26.738 "rw_ios_per_sec": 0, 00:26:26.738 "rw_mbytes_per_sec": 0, 00:26:26.738 "r_mbytes_per_sec": 0, 00:26:26.738 "w_mbytes_per_sec": 0 00:26:26.738 }, 00:26:26.738 "claimed": false, 00:26:26.738 "zoned": false, 00:26:26.738 "supported_io_types": { 00:26:26.738 "read": true, 00:26:26.738 "write": true, 00:26:26.738 "unmap": true, 00:26:26.738 "write_zeroes": true, 00:26:26.738 "flush": true, 00:26:26.738 "reset": true, 00:26:26.738 "compare": false, 00:26:26.738 "compare_and_write": false, 00:26:26.738 "abort": true, 00:26:26.738 "nvme_admin": false, 00:26:26.738 "nvme_io": false 00:26:26.738 }, 00:26:26.738 "memory_domains": [ 00:26:26.738 { 00:26:26.738 "dma_device_id": "system", 00:26:26.738 "dma_device_type": 1 00:26:26.738 }, 00:26:26.738 { 00:26:26.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.738 "dma_device_type": 2 00:26:26.738 } 00:26:26.738 ], 00:26:26.738 "driver_specific": {} 00:26:26.738 } 00:26:26.738 ] 00:26:26.738 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:26.738 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:26.738 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:26.738 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:26.996 BaseBdev3 00:26:26.996 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:26.996 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:26:26.996 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:26.996 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:26:26.996 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:26.996 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:26.996 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:27.254 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:27.512 [ 00:26:27.512 { 00:26:27.512 "name": "BaseBdev3", 00:26:27.512 "aliases": [ 00:26:27.512 "497545df-5644-4961-96ad-bf41ff2dd36b" 00:26:27.512 ], 00:26:27.512 "product_name": "Malloc disk", 00:26:27.512 "block_size": 512, 00:26:27.512 "num_blocks": 65536, 00:26:27.512 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:27.512 "assigned_rate_limits": { 00:26:27.512 "rw_ios_per_sec": 0, 00:26:27.512 "rw_mbytes_per_sec": 0, 00:26:27.512 "r_mbytes_per_sec": 0, 00:26:27.512 "w_mbytes_per_sec": 0 00:26:27.512 }, 00:26:27.512 
"claimed": false, 00:26:27.512 "zoned": false, 00:26:27.512 "supported_io_types": { 00:26:27.512 "read": true, 00:26:27.512 "write": true, 00:26:27.512 "unmap": true, 00:26:27.512 "write_zeroes": true, 00:26:27.512 "flush": true, 00:26:27.512 "reset": true, 00:26:27.512 "compare": false, 00:26:27.512 "compare_and_write": false, 00:26:27.512 "abort": true, 00:26:27.512 "nvme_admin": false, 00:26:27.512 "nvme_io": false 00:26:27.512 }, 00:26:27.513 "memory_domains": [ 00:26:27.513 { 00:26:27.513 "dma_device_id": "system", 00:26:27.513 "dma_device_type": 1 00:26:27.513 }, 00:26:27.513 { 00:26:27.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.513 "dma_device_type": 2 00:26:27.513 } 00:26:27.513 ], 00:26:27.513 "driver_specific": {} 00:26:27.513 } 00:26:27.513 ] 00:26:27.513 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:27.513 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:27.513 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:27.513 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:27.771 BaseBdev4 00:26:27.771 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:27.771 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:26:27.771 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:27.771 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:26:27.771 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:27.771 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:27.771 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:28.029 11:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:28.286 [ 00:26:28.286 { 00:26:28.286 "name": "BaseBdev4", 00:26:28.286 "aliases": [ 00:26:28.286 "47dfa2bd-b4f0-4f96-8ced-cb506570bf97" 00:26:28.286 ], 00:26:28.286 "product_name": "Malloc disk", 00:26:28.286 "block_size": 512, 00:26:28.286 "num_blocks": 65536, 00:26:28.286 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:28.286 "assigned_rate_limits": { 00:26:28.286 "rw_ios_per_sec": 0, 00:26:28.286 "rw_mbytes_per_sec": 0, 00:26:28.286 "r_mbytes_per_sec": 0, 00:26:28.286 "w_mbytes_per_sec": 0 00:26:28.286 }, 00:26:28.286 "claimed": false, 00:26:28.286 "zoned": false, 00:26:28.286 "supported_io_types": { 00:26:28.286 "read": true, 00:26:28.286 "write": true, 00:26:28.286 "unmap": true, 00:26:28.286 "write_zeroes": true, 00:26:28.286 "flush": true, 00:26:28.286 "reset": true, 00:26:28.286 "compare": false, 00:26:28.286 "compare_and_write": false, 00:26:28.286 "abort": true, 00:26:28.286 "nvme_admin": false, 00:26:28.286 "nvme_io": false 00:26:28.286 }, 00:26:28.286 "memory_domains": [ 00:26:28.286 { 00:26:28.286 "dma_device_id": "system", 00:26:28.286 "dma_device_type": 1 00:26:28.286 }, 00:26:28.286 { 00:26:28.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:26:28.286 "dma_device_type": 2 00:26:28.286 } 00:26:28.286 ], 00:26:28.286 "driver_specific": {} 00:26:28.286 } 00:26:28.286 ] 00:26:28.286 11:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:28.286 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:28.286 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:28.286 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:28.544 [2024-06-10 11:50:00.511784] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:28.544 [2024-06-10 11:50:00.511879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:28.544 [2024-06-10 11:50:00.511909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:28.544 [2024-06-10 11:50:00.514377] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:28.545 [2024-06-10 11:50:00.514460] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.545 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.803 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.803 "name": "Existed_Raid", 00:26:28.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.803 "strip_size_kb": 64, 00:26:28.803 "state": "configuring", 00:26:28.803 "raid_level": "concat", 00:26:28.803 "superblock": false, 00:26:28.803 "num_base_bdevs": 4, 00:26:28.803 "num_base_bdevs_discovered": 3, 00:26:28.803 "num_base_bdevs_operational": 4, 00:26:28.803 "base_bdevs_list": [ 00:26:28.803 { 00:26:28.803 "name": "BaseBdev1", 00:26:28.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.803 "is_configured": false, 00:26:28.803 "data_offset": 0, 00:26:28.803 "data_size": 0 00:26:28.803 }, 00:26:28.803 { 
00:26:28.803 "name": "BaseBdev2", 00:26:28.803 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:28.803 "is_configured": true, 00:26:28.803 "data_offset": 0, 00:26:28.803 "data_size": 65536 00:26:28.803 }, 00:26:28.803 { 00:26:28.803 "name": "BaseBdev3", 00:26:28.803 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:28.803 "is_configured": true, 00:26:28.803 "data_offset": 0, 00:26:28.803 "data_size": 65536 00:26:28.803 }, 00:26:28.803 { 00:26:28.803 "name": "BaseBdev4", 00:26:28.803 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:28.803 "is_configured": true, 00:26:28.803 "data_offset": 0, 00:26:28.803 "data_size": 65536 00:26:28.803 } 00:26:28.803 ] 00:26:28.803 }' 00:26:28.803 11:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.803 11:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.369 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:29.626 [2024-06-10 11:50:01.648144] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.626 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.884 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.884 "name": "Existed_Raid", 00:26:29.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.884 "strip_size_kb": 64, 00:26:29.884 "state": "configuring", 00:26:29.884 "raid_level": "concat", 00:26:29.884 "superblock": false, 00:26:29.884 "num_base_bdevs": 4, 00:26:29.884 "num_base_bdevs_discovered": 2, 00:26:29.885 "num_base_bdevs_operational": 4, 00:26:29.885 "base_bdevs_list": [ 00:26:29.885 { 00:26:29.885 "name": "BaseBdev1", 00:26:29.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.885 "is_configured": false, 00:26:29.885 "data_offset": 0, 00:26:29.885 "data_size": 0 00:26:29.885 }, 00:26:29.885 { 00:26:29.885 "name": null, 00:26:29.885 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:29.885 "is_configured": false, 00:26:29.885 
"data_offset": 0, 00:26:29.885 "data_size": 65536 00:26:29.885 }, 00:26:29.885 { 00:26:29.885 "name": "BaseBdev3", 00:26:29.885 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:29.885 "is_configured": true, 00:26:29.885 "data_offset": 0, 00:26:29.885 "data_size": 65536 00:26:29.885 }, 00:26:29.885 { 00:26:29.885 "name": "BaseBdev4", 00:26:29.885 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:29.885 "is_configured": true, 00:26:29.885 "data_offset": 0, 00:26:29.885 "data_size": 65536 00:26:29.885 } 00:26:29.885 ] 00:26:29.885 }' 00:26:29.885 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.885 11:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.842 11:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.842 11:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:30.842 11:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:30.842 11:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:31.101 [2024-06-10 11:50:03.123559] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:31.101 BaseBdev1 00:26:31.101 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:31.101 11:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:26:31.101 11:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:31.101 11:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:26:31.101 11:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:31.101 11:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:31.101 11:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:31.666 [ 00:26:31.666 { 00:26:31.666 "name": "BaseBdev1", 00:26:31.666 "aliases": [ 00:26:31.666 "6c2d2308-faac-4ede-b6c5-dc935cf6584d" 00:26:31.666 ], 00:26:31.666 "product_name": "Malloc disk", 00:26:31.666 "block_size": 512, 00:26:31.666 "num_blocks": 65536, 00:26:31.666 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:31.666 "assigned_rate_limits": { 00:26:31.666 "rw_ios_per_sec": 0, 00:26:31.666 "rw_mbytes_per_sec": 0, 00:26:31.666 "r_mbytes_per_sec": 0, 00:26:31.666 "w_mbytes_per_sec": 0 00:26:31.666 }, 00:26:31.666 "claimed": true, 00:26:31.666 "claim_type": "exclusive_write", 00:26:31.666 "zoned": false, 00:26:31.666 "supported_io_types": { 00:26:31.666 "read": true, 00:26:31.666 "write": true, 00:26:31.666 "unmap": true, 00:26:31.666 "write_zeroes": true, 00:26:31.666 "flush": true, 00:26:31.666 "reset": true, 00:26:31.666 "compare": false, 00:26:31.666 "compare_and_write": false, 00:26:31.666 "abort": true, 00:26:31.666 "nvme_admin": false, 
00:26:31.666 "nvme_io": false 00:26:31.666 }, 00:26:31.666 "memory_domains": [ 00:26:31.666 { 00:26:31.666 "dma_device_id": "system", 00:26:31.666 "dma_device_type": 1 00:26:31.666 }, 00:26:31.666 { 00:26:31.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.666 "dma_device_type": 2 00:26:31.666 } 00:26:31.666 ], 00:26:31.666 "driver_specific": {} 00:26:31.666 } 00:26:31.666 ] 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.666 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.924 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.924 "name": "Existed_Raid", 00:26:31.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.924 "strip_size_kb": 64, 00:26:31.924 "state": "configuring", 00:26:31.924 "raid_level": "concat", 00:26:31.924 "superblock": false, 00:26:31.924 "num_base_bdevs": 4, 00:26:31.924 "num_base_bdevs_discovered": 3, 00:26:31.924 "num_base_bdevs_operational": 4, 00:26:31.924 "base_bdevs_list": [ 00:26:31.924 { 00:26:31.924 "name": "BaseBdev1", 00:26:31.924 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:31.924 "is_configured": true, 00:26:31.924 "data_offset": 0, 00:26:31.924 "data_size": 65536 00:26:31.924 }, 00:26:31.924 { 00:26:31.924 "name": null, 00:26:31.924 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:31.924 "is_configured": false, 00:26:31.924 "data_offset": 0, 00:26:31.924 "data_size": 65536 00:26:31.924 }, 00:26:31.924 { 00:26:31.924 "name": "BaseBdev3", 00:26:31.924 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:31.924 "is_configured": true, 00:26:31.924 "data_offset": 0, 00:26:31.924 "data_size": 65536 00:26:31.924 }, 00:26:31.924 { 00:26:31.924 "name": "BaseBdev4", 00:26:31.924 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:31.924 "is_configured": true, 00:26:31.924 "data_offset": 0, 00:26:31.924 "data_size": 65536 00:26:31.924 } 00:26:31.924 ] 00:26:31.924 }' 00:26:31.924 11:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.924 11:50:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:32.858 11:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.858 11:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:32.858 11:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:32.858 11:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:33.147 [2024-06-10 11:50:05.071275] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:33.147 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:33.147 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:33.147 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:33.147 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:33.147 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:33.147 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:33.147 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.147 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.148 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.148 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.148 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.148 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.405 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.405 "name": "Existed_Raid", 00:26:33.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.405 "strip_size_kb": 64, 00:26:33.405 "state": "configuring", 00:26:33.405 "raid_level": "concat", 00:26:33.405 "superblock": false, 00:26:33.405 "num_base_bdevs": 4, 00:26:33.405 "num_base_bdevs_discovered": 2, 00:26:33.405 "num_base_bdevs_operational": 4, 00:26:33.405 "base_bdevs_list": [ 00:26:33.405 { 00:26:33.405 "name": "BaseBdev1", 00:26:33.405 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:33.405 "is_configured": true, 00:26:33.405 "data_offset": 0, 00:26:33.405 "data_size": 65536 00:26:33.405 }, 00:26:33.405 { 00:26:33.405 "name": null, 00:26:33.405 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:33.405 "is_configured": false, 00:26:33.405 "data_offset": 0, 00:26:33.405 "data_size": 65536 00:26:33.405 }, 00:26:33.405 { 00:26:33.405 "name": null, 00:26:33.405 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:33.405 "is_configured": false, 00:26:33.405 "data_offset": 0, 00:26:33.405 "data_size": 65536 00:26:33.406 }, 00:26:33.406 { 00:26:33.406 "name": "BaseBdev4", 00:26:33.406 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:33.406 "is_configured": true, 
00:26:33.406 "data_offset": 0, 00:26:33.406 "data_size": 65536 00:26:33.406 } 00:26:33.406 ] 00:26:33.406 }' 00:26:33.406 11:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.406 11:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.336 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.336 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:34.336 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:34.336 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:34.594 [2024-06-10 11:50:06.483664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.594 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:34.852 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.852 "name": "Existed_Raid", 00:26:34.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.852 "strip_size_kb": 64, 00:26:34.852 "state": "configuring", 00:26:34.852 "raid_level": "concat", 00:26:34.852 "superblock": false, 00:26:34.852 "num_base_bdevs": 4, 00:26:34.852 "num_base_bdevs_discovered": 3, 00:26:34.852 "num_base_bdevs_operational": 4, 00:26:34.852 "base_bdevs_list": [ 00:26:34.852 { 00:26:34.852 "name": "BaseBdev1", 00:26:34.852 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:34.852 "is_configured": true, 00:26:34.852 "data_offset": 0, 00:26:34.852 "data_size": 65536 00:26:34.852 }, 00:26:34.852 { 00:26:34.852 "name": null, 00:26:34.852 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:34.852 "is_configured": false, 00:26:34.852 "data_offset": 0, 00:26:34.852 "data_size": 65536 00:26:34.852 }, 00:26:34.852 { 00:26:34.852 "name": "BaseBdev3", 00:26:34.852 
"uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:34.852 "is_configured": true, 00:26:34.852 "data_offset": 0, 00:26:34.852 "data_size": 65536 00:26:34.852 }, 00:26:34.852 { 00:26:34.852 "name": "BaseBdev4", 00:26:34.852 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:34.852 "is_configured": true, 00:26:34.852 "data_offset": 0, 00:26:34.852 "data_size": 65536 00:26:34.852 } 00:26:34.852 ] 00:26:34.852 }' 00:26:34.852 11:50:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.852 11:50:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.418 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.418 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:35.677 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:35.677 11:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:35.935 [2024-06-10 11:50:07.940046] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.193 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.453 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.453 "name": "Existed_Raid", 00:26:36.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.453 "strip_size_kb": 64, 00:26:36.453 "state": "configuring", 00:26:36.453 "raid_level": "concat", 00:26:36.453 "superblock": false, 00:26:36.453 "num_base_bdevs": 4, 00:26:36.453 "num_base_bdevs_discovered": 2, 00:26:36.453 "num_base_bdevs_operational": 4, 00:26:36.453 "base_bdevs_list": [ 00:26:36.453 { 00:26:36.453 "name": null, 00:26:36.453 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:36.453 "is_configured": false, 00:26:36.453 "data_offset": 0, 00:26:36.453 "data_size": 65536 00:26:36.453 }, 00:26:36.453 { 
00:26:36.453 "name": null, 00:26:36.453 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:36.453 "is_configured": false, 00:26:36.453 "data_offset": 0, 00:26:36.453 "data_size": 65536 00:26:36.453 }, 00:26:36.453 { 00:26:36.453 "name": "BaseBdev3", 00:26:36.453 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:36.453 "is_configured": true, 00:26:36.453 "data_offset": 0, 00:26:36.453 "data_size": 65536 00:26:36.453 }, 00:26:36.453 { 00:26:36.453 "name": "BaseBdev4", 00:26:36.454 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:36.454 "is_configured": true, 00:26:36.454 "data_offset": 0, 00:26:36.454 "data_size": 65536 00:26:36.454 } 00:26:36.454 ] 00:26:36.454 }' 00:26:36.454 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.454 11:50:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.019 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.019 11:50:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:37.276 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:37.276 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:37.534 [2024-06-10 11:50:09.449550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:37.534 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:37.534 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:37.534 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:37.534 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:37.535 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:37.535 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:37.535 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:37.535 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:37.535 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:37.535 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:37.535 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:37.535 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.792 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.792 "name": "Existed_Raid", 00:26:37.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.792 "strip_size_kb": 64, 00:26:37.792 "state": "configuring", 00:26:37.792 "raid_level": "concat", 00:26:37.792 "superblock": false, 00:26:37.792 "num_base_bdevs": 4, 00:26:37.792 "num_base_bdevs_discovered": 3, 00:26:37.792 
"num_base_bdevs_operational": 4, 00:26:37.792 "base_bdevs_list": [ 00:26:37.792 { 00:26:37.792 "name": null, 00:26:37.792 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:37.792 "is_configured": false, 00:26:37.792 "data_offset": 0, 00:26:37.792 "data_size": 65536 00:26:37.792 }, 00:26:37.792 { 00:26:37.792 "name": "BaseBdev2", 00:26:37.792 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:37.792 "is_configured": true, 00:26:37.792 "data_offset": 0, 00:26:37.792 "data_size": 65536 00:26:37.792 }, 00:26:37.792 { 00:26:37.792 "name": "BaseBdev3", 00:26:37.792 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:37.792 "is_configured": true, 00:26:37.792 "data_offset": 0, 00:26:37.792 "data_size": 65536 00:26:37.792 }, 00:26:37.792 { 00:26:37.792 "name": "BaseBdev4", 00:26:37.792 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:37.792 "is_configured": true, 00:26:37.792 "data_offset": 0, 00:26:37.792 "data_size": 65536 00:26:37.792 } 00:26:37.792 ] 00:26:37.792 }' 00:26:37.792 11:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.792 11:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.357 11:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.357 11:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:38.614 11:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:38.614 11:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:38.614 11:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.871 11:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6c2d2308-faac-4ede-b6c5-dc935cf6584d 00:26:39.129 [2024-06-10 11:50:11.073627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:39.129 [2024-06-10 11:50:11.073696] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:26:39.129 [2024-06-10 11:50:11.073705] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:26:39.129 [2024-06-10 11:50:11.073823] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:39.129 [2024-06-10 11:50:11.074182] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:26:39.129 [2024-06-10 11:50:11.074213] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:26:39.129 [2024-06-10 11:50:11.074472] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.129 NewBaseBdev 00:26:39.129 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:39.129 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:26:39.129 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:39.129 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 
00:26:39.129 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:39.129 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:39.129 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:39.446 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:39.703 [ 00:26:39.703 { 00:26:39.703 "name": "NewBaseBdev", 00:26:39.703 "aliases": [ 00:26:39.703 "6c2d2308-faac-4ede-b6c5-dc935cf6584d" 00:26:39.703 ], 00:26:39.703 "product_name": "Malloc disk", 00:26:39.704 "block_size": 512, 00:26:39.704 "num_blocks": 65536, 00:26:39.704 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:39.704 "assigned_rate_limits": { 00:26:39.704 "rw_ios_per_sec": 0, 00:26:39.704 "rw_mbytes_per_sec": 0, 00:26:39.704 "r_mbytes_per_sec": 0, 00:26:39.704 "w_mbytes_per_sec": 0 00:26:39.704 }, 00:26:39.704 "claimed": true, 00:26:39.704 "claim_type": "exclusive_write", 00:26:39.704 "zoned": false, 00:26:39.704 "supported_io_types": { 00:26:39.704 "read": true, 00:26:39.704 "write": true, 00:26:39.704 "unmap": true, 00:26:39.704 "write_zeroes": true, 00:26:39.704 "flush": true, 00:26:39.704 "reset": true, 00:26:39.704 "compare": false, 00:26:39.704 "compare_and_write": false, 00:26:39.704 "abort": true, 00:26:39.704 "nvme_admin": false, 00:26:39.704 "nvme_io": false 00:26:39.704 }, 00:26:39.704 "memory_domains": [ 00:26:39.704 { 00:26:39.704 "dma_device_id": "system", 00:26:39.704 "dma_device_type": 1 00:26:39.704 }, 00:26:39.704 { 00:26:39.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.704 "dma_device_type": 2 00:26:39.704 } 00:26:39.704 ], 00:26:39.704 "driver_specific": {} 00:26:39.704 } 00:26:39.704 ] 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.704 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:26:39.961 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.961 "name": "Existed_Raid", 00:26:39.961 "uuid": "3c94b127-2377-4557-9da1-5b063ced0d0a", 00:26:39.961 "strip_size_kb": 64, 00:26:39.961 "state": "online", 00:26:39.961 "raid_level": "concat", 00:26:39.961 "superblock": false, 00:26:39.961 "num_base_bdevs": 4, 00:26:39.961 "num_base_bdevs_discovered": 4, 00:26:39.961 "num_base_bdevs_operational": 4, 00:26:39.961 "base_bdevs_list": [ 00:26:39.961 { 00:26:39.961 "name": "NewBaseBdev", 00:26:39.961 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:39.961 "is_configured": true, 00:26:39.961 "data_offset": 0, 00:26:39.961 "data_size": 65536 00:26:39.961 }, 00:26:39.961 { 00:26:39.961 "name": "BaseBdev2", 00:26:39.961 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:39.961 "is_configured": true, 00:26:39.961 "data_offset": 0, 00:26:39.961 "data_size": 65536 00:26:39.961 }, 00:26:39.961 { 00:26:39.961 "name": "BaseBdev3", 00:26:39.961 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:39.961 "is_configured": true, 00:26:39.961 "data_offset": 0, 00:26:39.961 "data_size": 65536 00:26:39.961 }, 00:26:39.961 { 00:26:39.961 "name": "BaseBdev4", 00:26:39.961 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:39.961 "is_configured": true, 00:26:39.961 "data_offset": 0, 00:26:39.961 "data_size": 65536 00:26:39.961 } 00:26:39.961 ] 00:26:39.961 }' 00:26:39.961 11:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.961 11:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.525 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:40.525 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:40.525 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:40.525 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:40.525 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:40.525 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:40.525 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:40.525 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:40.782 [2024-06-10 11:50:12.730395] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:40.782 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:40.782 "name": "Existed_Raid", 00:26:40.782 "aliases": [ 00:26:40.782 "3c94b127-2377-4557-9da1-5b063ced0d0a" 00:26:40.782 ], 00:26:40.782 "product_name": "Raid Volume", 00:26:40.782 "block_size": 512, 00:26:40.782 "num_blocks": 262144, 00:26:40.782 "uuid": "3c94b127-2377-4557-9da1-5b063ced0d0a", 00:26:40.782 "assigned_rate_limits": { 00:26:40.782 "rw_ios_per_sec": 0, 00:26:40.782 "rw_mbytes_per_sec": 0, 00:26:40.782 "r_mbytes_per_sec": 0, 00:26:40.782 "w_mbytes_per_sec": 0 00:26:40.782 }, 00:26:40.782 "claimed": false, 00:26:40.782 "zoned": false, 00:26:40.782 "supported_io_types": { 00:26:40.782 "read": true, 00:26:40.782 "write": true, 00:26:40.782 "unmap": true, 00:26:40.782 "write_zeroes": true, 00:26:40.782 "flush": 
true, 00:26:40.782 "reset": true, 00:26:40.782 "compare": false, 00:26:40.782 "compare_and_write": false, 00:26:40.782 "abort": false, 00:26:40.782 "nvme_admin": false, 00:26:40.782 "nvme_io": false 00:26:40.782 }, 00:26:40.782 "memory_domains": [ 00:26:40.782 { 00:26:40.782 "dma_device_id": "system", 00:26:40.782 "dma_device_type": 1 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.782 "dma_device_type": 2 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "system", 00:26:40.782 "dma_device_type": 1 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.782 "dma_device_type": 2 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "system", 00:26:40.782 "dma_device_type": 1 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.782 "dma_device_type": 2 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "system", 00:26:40.782 "dma_device_type": 1 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.782 "dma_device_type": 2 00:26:40.782 } 00:26:40.782 ], 00:26:40.782 "driver_specific": { 00:26:40.782 "raid": { 00:26:40.782 "uuid": "3c94b127-2377-4557-9da1-5b063ced0d0a", 00:26:40.782 "strip_size_kb": 64, 00:26:40.782 "state": "online", 00:26:40.782 "raid_level": "concat", 00:26:40.782 "superblock": false, 00:26:40.782 "num_base_bdevs": 4, 00:26:40.782 "num_base_bdevs_discovered": 4, 00:26:40.782 "num_base_bdevs_operational": 4, 00:26:40.782 "base_bdevs_list": [ 00:26:40.782 { 00:26:40.782 "name": "NewBaseBdev", 00:26:40.782 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:40.782 "is_configured": true, 00:26:40.782 "data_offset": 0, 00:26:40.782 "data_size": 65536 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "name": "BaseBdev2", 00:26:40.782 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:40.782 "is_configured": true, 00:26:40.782 "data_offset": 0, 00:26:40.782 "data_size": 65536 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "name": "BaseBdev3", 00:26:40.782 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:40.782 "is_configured": true, 00:26:40.782 "data_offset": 0, 00:26:40.782 "data_size": 65536 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "name": "BaseBdev4", 00:26:40.782 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:40.782 "is_configured": true, 00:26:40.782 "data_offset": 0, 00:26:40.782 "data_size": 65536 00:26:40.782 } 00:26:40.782 ] 00:26:40.782 } 00:26:40.782 } 00:26:40.782 }' 00:26:40.782 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:40.782 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:40.782 BaseBdev2 00:26:40.782 BaseBdev3 00:26:40.782 BaseBdev4' 00:26:40.782 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:40.782 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:40.782 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.039 11:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:41.039 "name": "NewBaseBdev", 00:26:41.039 "aliases": [ 00:26:41.039 "6c2d2308-faac-4ede-b6c5-dc935cf6584d" 00:26:41.039 ], 00:26:41.039 
"product_name": "Malloc disk", 00:26:41.039 "block_size": 512, 00:26:41.039 "num_blocks": 65536, 00:26:41.039 "uuid": "6c2d2308-faac-4ede-b6c5-dc935cf6584d", 00:26:41.039 "assigned_rate_limits": { 00:26:41.039 "rw_ios_per_sec": 0, 00:26:41.039 "rw_mbytes_per_sec": 0, 00:26:41.039 "r_mbytes_per_sec": 0, 00:26:41.039 "w_mbytes_per_sec": 0 00:26:41.039 }, 00:26:41.039 "claimed": true, 00:26:41.039 "claim_type": "exclusive_write", 00:26:41.039 "zoned": false, 00:26:41.039 "supported_io_types": { 00:26:41.039 "read": true, 00:26:41.039 "write": true, 00:26:41.039 "unmap": true, 00:26:41.039 "write_zeroes": true, 00:26:41.039 "flush": true, 00:26:41.039 "reset": true, 00:26:41.039 "compare": false, 00:26:41.039 "compare_and_write": false, 00:26:41.039 "abort": true, 00:26:41.039 "nvme_admin": false, 00:26:41.039 "nvme_io": false 00:26:41.039 }, 00:26:41.039 "memory_domains": [ 00:26:41.039 { 00:26:41.039 "dma_device_id": "system", 00:26:41.039 "dma_device_type": 1 00:26:41.039 }, 00:26:41.039 { 00:26:41.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.039 "dma_device_type": 2 00:26:41.039 } 00:26:41.039 ], 00:26:41.039 "driver_specific": {} 00:26:41.039 }' 00:26:41.039 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.039 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.039 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:41.296 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.296 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.296 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:41.296 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.296 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.296 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:41.296 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.553 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.553 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:41.553 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:41.553 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.553 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:41.810 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:41.810 "name": "BaseBdev2", 00:26:41.810 "aliases": [ 00:26:41.810 "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea" 00:26:41.810 ], 00:26:41.810 "product_name": "Malloc disk", 00:26:41.810 "block_size": 512, 00:26:41.810 "num_blocks": 65536, 00:26:41.810 "uuid": "fff07d83-9d8b-4b02-9000-12f5dbd1d5ea", 00:26:41.810 "assigned_rate_limits": { 00:26:41.810 "rw_ios_per_sec": 0, 00:26:41.810 "rw_mbytes_per_sec": 0, 00:26:41.810 "r_mbytes_per_sec": 0, 00:26:41.810 "w_mbytes_per_sec": 0 00:26:41.810 }, 00:26:41.810 "claimed": true, 00:26:41.810 "claim_type": "exclusive_write", 00:26:41.810 "zoned": false, 00:26:41.810 "supported_io_types": { 
00:26:41.810 "read": true, 00:26:41.810 "write": true, 00:26:41.810 "unmap": true, 00:26:41.810 "write_zeroes": true, 00:26:41.810 "flush": true, 00:26:41.810 "reset": true, 00:26:41.810 "compare": false, 00:26:41.810 "compare_and_write": false, 00:26:41.810 "abort": true, 00:26:41.810 "nvme_admin": false, 00:26:41.810 "nvme_io": false 00:26:41.810 }, 00:26:41.810 "memory_domains": [ 00:26:41.810 { 00:26:41.810 "dma_device_id": "system", 00:26:41.810 "dma_device_type": 1 00:26:41.810 }, 00:26:41.810 { 00:26:41.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.810 "dma_device_type": 2 00:26:41.810 } 00:26:41.810 ], 00:26:41.810 "driver_specific": {} 00:26:41.810 }' 00:26:41.811 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.811 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.811 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:41.811 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.811 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.811 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:41.811 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.811 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.068 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:42.068 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.068 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.068 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:42.068 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:42.068 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:42.068 11:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:42.325 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:42.325 "name": "BaseBdev3", 00:26:42.325 "aliases": [ 00:26:42.325 "497545df-5644-4961-96ad-bf41ff2dd36b" 00:26:42.325 ], 00:26:42.325 "product_name": "Malloc disk", 00:26:42.325 "block_size": 512, 00:26:42.325 "num_blocks": 65536, 00:26:42.325 "uuid": "497545df-5644-4961-96ad-bf41ff2dd36b", 00:26:42.325 "assigned_rate_limits": { 00:26:42.325 "rw_ios_per_sec": 0, 00:26:42.325 "rw_mbytes_per_sec": 0, 00:26:42.325 "r_mbytes_per_sec": 0, 00:26:42.325 "w_mbytes_per_sec": 0 00:26:42.325 }, 00:26:42.325 "claimed": true, 00:26:42.325 "claim_type": "exclusive_write", 00:26:42.325 "zoned": false, 00:26:42.325 "supported_io_types": { 00:26:42.325 "read": true, 00:26:42.325 "write": true, 00:26:42.325 "unmap": true, 00:26:42.325 "write_zeroes": true, 00:26:42.325 "flush": true, 00:26:42.325 "reset": true, 00:26:42.325 "compare": false, 00:26:42.325 "compare_and_write": false, 00:26:42.325 "abort": true, 00:26:42.325 "nvme_admin": false, 00:26:42.325 "nvme_io": false 00:26:42.325 }, 00:26:42.325 "memory_domains": [ 00:26:42.325 { 00:26:42.325 "dma_device_id": "system", 00:26:42.325 "dma_device_type": 1 00:26:42.325 }, 
00:26:42.325 { 00:26:42.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.325 "dma_device_type": 2 00:26:42.325 } 00:26:42.325 ], 00:26:42.325 "driver_specific": {} 00:26:42.325 }' 00:26:42.325 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:42.583 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.841 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.841 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:42.841 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:42.841 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:42.841 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:43.099 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:43.099 "name": "BaseBdev4", 00:26:43.099 "aliases": [ 00:26:43.099 "47dfa2bd-b4f0-4f96-8ced-cb506570bf97" 00:26:43.099 ], 00:26:43.099 "product_name": "Malloc disk", 00:26:43.099 "block_size": 512, 00:26:43.099 "num_blocks": 65536, 00:26:43.099 "uuid": "47dfa2bd-b4f0-4f96-8ced-cb506570bf97", 00:26:43.099 "assigned_rate_limits": { 00:26:43.099 "rw_ios_per_sec": 0, 00:26:43.099 "rw_mbytes_per_sec": 0, 00:26:43.099 "r_mbytes_per_sec": 0, 00:26:43.099 "w_mbytes_per_sec": 0 00:26:43.099 }, 00:26:43.099 "claimed": true, 00:26:43.099 "claim_type": "exclusive_write", 00:26:43.099 "zoned": false, 00:26:43.099 "supported_io_types": { 00:26:43.099 "read": true, 00:26:43.099 "write": true, 00:26:43.099 "unmap": true, 00:26:43.099 "write_zeroes": true, 00:26:43.099 "flush": true, 00:26:43.099 "reset": true, 00:26:43.099 "compare": false, 00:26:43.099 "compare_and_write": false, 00:26:43.099 "abort": true, 00:26:43.099 "nvme_admin": false, 00:26:43.099 "nvme_io": false 00:26:43.099 }, 00:26:43.099 "memory_domains": [ 00:26:43.099 { 00:26:43.099 "dma_device_id": "system", 00:26:43.099 "dma_device_type": 1 00:26:43.099 }, 00:26:43.099 { 00:26:43.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.099 "dma_device_type": 2 00:26:43.099 } 00:26:43.099 ], 00:26:43.099 "driver_specific": {} 00:26:43.099 }' 00:26:43.099 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.099 11:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.099 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:26:43.099 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.099 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.099 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:43.099 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.356 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.356 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:43.356 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.356 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.356 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:43.356 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:43.613 [2024-06-10 11:50:15.538745] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:43.613 [2024-06-10 11:50:15.538792] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:43.613 [2024-06-10 11:50:15.538869] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:43.613 [2024-06-10 11:50:15.538943] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:43.614 [2024-06-10 11:50:15.538954] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 138968 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 138968 ']' 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 138968 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 138968 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:43.614 killing process with pid 138968 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 138968' 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 138968 00:26:43.614 [2024-06-10 11:50:15.582143] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:43.614 11:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 138968 00:26:44.179 [2024-06-10 11:50:16.024837] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:45.552 ************************************ 00:26:45.552 END TEST raid_state_function_test 00:26:45.552 ************************************ 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 
00:26:45.552 00:26:45.552 real 0m35.869s 00:26:45.552 user 1m5.442s 00:26:45.552 sys 0m4.723s 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.552 11:50:17 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:26:45.552 11:50:17 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:26:45.552 11:50:17 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:45.552 11:50:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:45.552 ************************************ 00:26:45.552 START TEST raid_state_function_test_sb 00:26:45.552 ************************************ 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 4 true 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:45.552 
11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:45.552 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=140087 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:45.553 Process raid pid: 140087 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 140087' 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 140087 /var/tmp/spdk-raid.sock 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 140087 ']' 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:45.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:45.553 11:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.553 [2024-06-10 11:50:17.479797] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
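[editor's note] Here the superblock variant brings up its own RPC target: test/app/bdev_svc/bdev_svc is started against /var/tmp/spdk-raid.sock with bdev_raid debug logging, and waitforlisten blocks until the socket answers. A rough sketch of that bring-up, with the waitforlisten helper from autotest_common.sh approximated by a simple poll loop rather than reproduced:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Poor man's waitforlisten: poll rpc_get_methods until the UNIX socket responds.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" || { echo "bdev_svc exited before listening" >&2; exit 1; }
        sleep 0.2
    done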
00:26:45.553 [2024-06-10 11:50:17.480692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.840 [2024-06-10 11:50:17.647662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.098 [2024-06-10 11:50:17.920408] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.098 [2024-06-10 11:50:18.122218] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:46.357 11:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:46.357 11:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:26:46.357 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:46.617 [2024-06-10 11:50:18.512741] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:46.617 [2024-06-10 11:50:18.513018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:46.617 [2024-06-10 11:50:18.513126] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:46.617 [2024-06-10 11:50:18.513216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:46.617 [2024-06-10 11:50:18.513319] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:46.617 [2024-06-10 11:50:18.513435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:46.617 [2024-06-10 11:50:18.513515] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:46.617 [2024-06-10 11:50:18.513572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.617 11:50:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:46.875 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.875 "name": "Existed_Raid", 00:26:46.875 "uuid": "69e9875a-46c3-4341-addb-4c5c67727b6c", 00:26:46.875 "strip_size_kb": 64, 00:26:46.875 "state": "configuring", 00:26:46.875 "raid_level": "concat", 00:26:46.875 "superblock": true, 00:26:46.875 "num_base_bdevs": 4, 00:26:46.875 "num_base_bdevs_discovered": 0, 00:26:46.875 "num_base_bdevs_operational": 4, 00:26:46.875 "base_bdevs_list": [ 00:26:46.875 { 00:26:46.875 "name": "BaseBdev1", 00:26:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.875 "is_configured": false, 00:26:46.875 "data_offset": 0, 00:26:46.875 "data_size": 0 00:26:46.875 }, 00:26:46.875 { 00:26:46.875 "name": "BaseBdev2", 00:26:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.875 "is_configured": false, 00:26:46.875 "data_offset": 0, 00:26:46.875 "data_size": 0 00:26:46.875 }, 00:26:46.875 { 00:26:46.875 "name": "BaseBdev3", 00:26:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.875 "is_configured": false, 00:26:46.875 "data_offset": 0, 00:26:46.875 "data_size": 0 00:26:46.875 }, 00:26:46.875 { 00:26:46.875 "name": "BaseBdev4", 00:26:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.875 "is_configured": false, 00:26:46.875 "data_offset": 0, 00:26:46.875 "data_size": 0 00:26:46.875 } 00:26:46.875 ] 00:26:46.876 }' 00:26:46.876 11:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.876 11:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.441 11:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:47.699 [2024-06-10 11:50:19.622034] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:47.699 [2024-06-10 11:50:19.622314] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:26:47.699 11:50:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:47.958 [2024-06-10 11:50:19.826048] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:47.958 [2024-06-10 11:50:19.826550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:47.958 [2024-06-10 11:50:19.826751] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:47.958 [2024-06-10 11:50:19.826887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:47.958 [2024-06-10 11:50:19.826985] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:47.958 [2024-06-10 11:50:19.827064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:47.958 [2024-06-10 11:50:19.827142] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:47.958 [2024-06-10 11:50:19.827202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:47.958 11:50:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:48.218 [2024-06-10 11:50:20.153088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:48.218 BaseBdev1 00:26:48.218 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:48.218 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:26:48.218 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:48.218 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:26:48.218 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:48.218 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:48.218 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:48.476 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:48.734 [ 00:26:48.734 { 00:26:48.734 "name": "BaseBdev1", 00:26:48.734 "aliases": [ 00:26:48.734 "0bcb4c46-d516-40f6-9c7e-7b819a13303e" 00:26:48.734 ], 00:26:48.734 "product_name": "Malloc disk", 00:26:48.734 "block_size": 512, 00:26:48.734 "num_blocks": 65536, 00:26:48.734 "uuid": "0bcb4c46-d516-40f6-9c7e-7b819a13303e", 00:26:48.734 "assigned_rate_limits": { 00:26:48.734 "rw_ios_per_sec": 0, 00:26:48.734 "rw_mbytes_per_sec": 0, 00:26:48.734 "r_mbytes_per_sec": 0, 00:26:48.734 "w_mbytes_per_sec": 0 00:26:48.734 }, 00:26:48.734 "claimed": true, 00:26:48.734 "claim_type": "exclusive_write", 00:26:48.734 "zoned": false, 00:26:48.734 "supported_io_types": { 00:26:48.734 "read": true, 00:26:48.734 "write": true, 00:26:48.734 "unmap": true, 00:26:48.734 "write_zeroes": true, 00:26:48.734 "flush": true, 00:26:48.734 "reset": true, 00:26:48.734 "compare": false, 00:26:48.734 "compare_and_write": false, 00:26:48.734 "abort": true, 00:26:48.734 "nvme_admin": false, 00:26:48.734 "nvme_io": false 00:26:48.734 }, 00:26:48.734 "memory_domains": [ 00:26:48.734 { 00:26:48.734 "dma_device_id": "system", 00:26:48.734 "dma_device_type": 1 00:26:48.734 }, 00:26:48.734 { 00:26:48.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.734 "dma_device_type": 2 00:26:48.734 } 00:26:48.734 ], 00:26:48.734 "driver_specific": {} 00:26:48.734 } 00:26:48.734 ] 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.734 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.991 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.991 "name": "Existed_Raid", 00:26:48.991 "uuid": "3787ad72-6b7d-4553-a771-4d29b25359b1", 00:26:48.991 "strip_size_kb": 64, 00:26:48.991 "state": "configuring", 00:26:48.991 "raid_level": "concat", 00:26:48.991 "superblock": true, 00:26:48.991 "num_base_bdevs": 4, 00:26:48.991 "num_base_bdevs_discovered": 1, 00:26:48.991 "num_base_bdevs_operational": 4, 00:26:48.991 "base_bdevs_list": [ 00:26:48.991 { 00:26:48.991 "name": "BaseBdev1", 00:26:48.991 "uuid": "0bcb4c46-d516-40f6-9c7e-7b819a13303e", 00:26:48.991 "is_configured": true, 00:26:48.991 "data_offset": 2048, 00:26:48.991 "data_size": 63488 00:26:48.991 }, 00:26:48.991 { 00:26:48.991 "name": "BaseBdev2", 00:26:48.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.991 "is_configured": false, 00:26:48.991 "data_offset": 0, 00:26:48.991 "data_size": 0 00:26:48.991 }, 00:26:48.991 { 00:26:48.991 "name": "BaseBdev3", 00:26:48.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.991 "is_configured": false, 00:26:48.991 "data_offset": 0, 00:26:48.991 "data_size": 0 00:26:48.991 }, 00:26:48.991 { 00:26:48.991 "name": "BaseBdev4", 00:26:48.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.991 "is_configured": false, 00:26:48.991 "data_offset": 0, 00:26:48.991 "data_size": 0 00:26:48.991 } 00:26:48.991 ] 00:26:48.991 }' 00:26:48.991 11:50:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.991 11:50:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.557 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:49.816 [2024-06-10 11:50:21.693561] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:49.816 [2024-06-10 11:50:21.693859] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:26:49.816 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:50.074 [2024-06-10 11:50:21.897736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:50.074 [2024-06-10 11:50:21.900031] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:50.074 [2024-06-10 11:50:21.900198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
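[editor's note] The pattern repeated through this stretch of the log is: declare the concat raid with an on-disk superblock while its members do not exist yet (so it sits in the "configuring" state), create each malloc base bdev so the raid claims it, then re-read the raid JSON with jq after every step. A condensed sketch of one iteration; the real test wraps these checks in the waitforbdev and verify_raid_bdev_state helpers:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Raid declared first: 64 KiB strip size (-z 64), superblock (-s), concat level.
    $RPC bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # First 32 MiB / 512 B malloc member; the configuring raid claims it on creation.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1

    info=$($RPC bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<<"$info") == configuring ]]
    [[ $(jq -r .num_base_bdevs_discovered <<<"$info") -eq 1 ]]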
00:26:50.074 [2024-06-10 11:50:21.900319] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:50.074 [2024-06-10 11:50:21.900464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:50.074 [2024-06-10 11:50:21.900548] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:50.074 [2024-06-10 11:50:21.900606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:50.074 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:50.074 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:50.074 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.075 11:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.075 11:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.075 "name": "Existed_Raid", 00:26:50.075 "uuid": "84dc7f47-2bf2-475c-ae16-bc09e2a031d5", 00:26:50.075 "strip_size_kb": 64, 00:26:50.075 "state": "configuring", 00:26:50.075 "raid_level": "concat", 00:26:50.075 "superblock": true, 00:26:50.075 "num_base_bdevs": 4, 00:26:50.075 "num_base_bdevs_discovered": 1, 00:26:50.075 "num_base_bdevs_operational": 4, 00:26:50.075 "base_bdevs_list": [ 00:26:50.075 { 00:26:50.075 "name": "BaseBdev1", 00:26:50.075 "uuid": "0bcb4c46-d516-40f6-9c7e-7b819a13303e", 00:26:50.075 "is_configured": true, 00:26:50.075 "data_offset": 2048, 00:26:50.075 "data_size": 63488 00:26:50.075 }, 00:26:50.075 { 00:26:50.075 "name": "BaseBdev2", 00:26:50.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.075 "is_configured": false, 00:26:50.075 "data_offset": 0, 00:26:50.075 "data_size": 0 00:26:50.075 }, 00:26:50.075 { 00:26:50.075 "name": "BaseBdev3", 00:26:50.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.075 "is_configured": false, 00:26:50.075 "data_offset": 0, 00:26:50.075 "data_size": 0 00:26:50.075 }, 00:26:50.075 { 00:26:50.075 "name": "BaseBdev4", 00:26:50.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.075 
"is_configured": false, 00:26:50.075 "data_offset": 0, 00:26:50.075 "data_size": 0 00:26:50.075 } 00:26:50.075 ] 00:26:50.075 }' 00:26:50.075 11:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.075 11:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.006 11:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:51.006 [2024-06-10 11:50:22.973093] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:51.006 BaseBdev2 00:26:51.006 11:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:51.006 11:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:26:51.006 11:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:51.006 11:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:26:51.006 11:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:51.006 11:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:51.006 11:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:51.264 11:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:51.523 [ 00:26:51.523 { 00:26:51.523 "name": "BaseBdev2", 00:26:51.523 "aliases": [ 00:26:51.523 "e888b863-1d7d-45d7-8d50-d1d9dc7067ed" 00:26:51.523 ], 00:26:51.523 "product_name": "Malloc disk", 00:26:51.523 "block_size": 512, 00:26:51.523 "num_blocks": 65536, 00:26:51.523 "uuid": "e888b863-1d7d-45d7-8d50-d1d9dc7067ed", 00:26:51.523 "assigned_rate_limits": { 00:26:51.523 "rw_ios_per_sec": 0, 00:26:51.523 "rw_mbytes_per_sec": 0, 00:26:51.523 "r_mbytes_per_sec": 0, 00:26:51.523 "w_mbytes_per_sec": 0 00:26:51.523 }, 00:26:51.523 "claimed": true, 00:26:51.523 "claim_type": "exclusive_write", 00:26:51.523 "zoned": false, 00:26:51.523 "supported_io_types": { 00:26:51.523 "read": true, 00:26:51.523 "write": true, 00:26:51.523 "unmap": true, 00:26:51.523 "write_zeroes": true, 00:26:51.523 "flush": true, 00:26:51.523 "reset": true, 00:26:51.523 "compare": false, 00:26:51.523 "compare_and_write": false, 00:26:51.523 "abort": true, 00:26:51.523 "nvme_admin": false, 00:26:51.523 "nvme_io": false 00:26:51.523 }, 00:26:51.523 "memory_domains": [ 00:26:51.523 { 00:26:51.523 "dma_device_id": "system", 00:26:51.523 "dma_device_type": 1 00:26:51.523 }, 00:26:51.523 { 00:26:51.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.523 "dma_device_type": 2 00:26:51.523 } 00:26:51.523 ], 00:26:51.523 "driver_specific": {} 00:26:51.523 } 00:26:51.523 ] 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.523 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.781 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.781 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:52.039 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.039 "name": "Existed_Raid", 00:26:52.039 "uuid": "84dc7f47-2bf2-475c-ae16-bc09e2a031d5", 00:26:52.039 "strip_size_kb": 64, 00:26:52.039 "state": "configuring", 00:26:52.039 "raid_level": "concat", 00:26:52.039 "superblock": true, 00:26:52.039 "num_base_bdevs": 4, 00:26:52.039 "num_base_bdevs_discovered": 2, 00:26:52.039 "num_base_bdevs_operational": 4, 00:26:52.039 "base_bdevs_list": [ 00:26:52.039 { 00:26:52.039 "name": "BaseBdev1", 00:26:52.039 "uuid": "0bcb4c46-d516-40f6-9c7e-7b819a13303e", 00:26:52.039 "is_configured": true, 00:26:52.039 "data_offset": 2048, 00:26:52.039 "data_size": 63488 00:26:52.039 }, 00:26:52.039 { 00:26:52.039 "name": "BaseBdev2", 00:26:52.039 "uuid": "e888b863-1d7d-45d7-8d50-d1d9dc7067ed", 00:26:52.039 "is_configured": true, 00:26:52.039 "data_offset": 2048, 00:26:52.039 "data_size": 63488 00:26:52.039 }, 00:26:52.039 { 00:26:52.039 "name": "BaseBdev3", 00:26:52.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.039 "is_configured": false, 00:26:52.039 "data_offset": 0, 00:26:52.039 "data_size": 0 00:26:52.039 }, 00:26:52.039 { 00:26:52.039 "name": "BaseBdev4", 00:26:52.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.039 "is_configured": false, 00:26:52.039 "data_offset": 0, 00:26:52.039 "data_size": 0 00:26:52.039 } 00:26:52.039 ] 00:26:52.039 }' 00:26:52.039 11:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.039 11:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.608 11:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:52.867 [2024-06-10 11:50:24.806538] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:52.867 BaseBdev3 00:26:52.867 11:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:52.867 11:50:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:26:52.867 11:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:52.867 11:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:26:52.867 11:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:52.867 11:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:52.867 11:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:53.125 11:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:53.385 [ 00:26:53.385 { 00:26:53.385 "name": "BaseBdev3", 00:26:53.385 "aliases": [ 00:26:53.385 "4db82c07-42cf-4514-a40e-281266a68e5c" 00:26:53.385 ], 00:26:53.385 "product_name": "Malloc disk", 00:26:53.385 "block_size": 512, 00:26:53.385 "num_blocks": 65536, 00:26:53.385 "uuid": "4db82c07-42cf-4514-a40e-281266a68e5c", 00:26:53.385 "assigned_rate_limits": { 00:26:53.385 "rw_ios_per_sec": 0, 00:26:53.385 "rw_mbytes_per_sec": 0, 00:26:53.385 "r_mbytes_per_sec": 0, 00:26:53.385 "w_mbytes_per_sec": 0 00:26:53.385 }, 00:26:53.385 "claimed": true, 00:26:53.385 "claim_type": "exclusive_write", 00:26:53.385 "zoned": false, 00:26:53.385 "supported_io_types": { 00:26:53.385 "read": true, 00:26:53.385 "write": true, 00:26:53.385 "unmap": true, 00:26:53.385 "write_zeroes": true, 00:26:53.385 "flush": true, 00:26:53.385 "reset": true, 00:26:53.385 "compare": false, 00:26:53.385 "compare_and_write": false, 00:26:53.385 "abort": true, 00:26:53.385 "nvme_admin": false, 00:26:53.385 "nvme_io": false 00:26:53.385 }, 00:26:53.385 "memory_domains": [ 00:26:53.385 { 00:26:53.385 "dma_device_id": "system", 00:26:53.385 "dma_device_type": 1 00:26:53.385 }, 00:26:53.385 { 00:26:53.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.385 "dma_device_type": 2 00:26:53.385 } 00:26:53.385 ], 00:26:53.385 "driver_specific": {} 00:26:53.385 } 00:26:53.385 ] 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.385 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.643 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:53.643 "name": "Existed_Raid", 00:26:53.643 "uuid": "84dc7f47-2bf2-475c-ae16-bc09e2a031d5", 00:26:53.643 "strip_size_kb": 64, 00:26:53.643 "state": "configuring", 00:26:53.643 "raid_level": "concat", 00:26:53.643 "superblock": true, 00:26:53.643 "num_base_bdevs": 4, 00:26:53.643 "num_base_bdevs_discovered": 3, 00:26:53.643 "num_base_bdevs_operational": 4, 00:26:53.643 "base_bdevs_list": [ 00:26:53.643 { 00:26:53.643 "name": "BaseBdev1", 00:26:53.643 "uuid": "0bcb4c46-d516-40f6-9c7e-7b819a13303e", 00:26:53.643 "is_configured": true, 00:26:53.644 "data_offset": 2048, 00:26:53.644 "data_size": 63488 00:26:53.644 }, 00:26:53.644 { 00:26:53.644 "name": "BaseBdev2", 00:26:53.644 "uuid": "e888b863-1d7d-45d7-8d50-d1d9dc7067ed", 00:26:53.644 "is_configured": true, 00:26:53.644 "data_offset": 2048, 00:26:53.644 "data_size": 63488 00:26:53.644 }, 00:26:53.644 { 00:26:53.644 "name": "BaseBdev3", 00:26:53.644 "uuid": "4db82c07-42cf-4514-a40e-281266a68e5c", 00:26:53.644 "is_configured": true, 00:26:53.644 "data_offset": 2048, 00:26:53.644 "data_size": 63488 00:26:53.644 }, 00:26:53.644 { 00:26:53.644 "name": "BaseBdev4", 00:26:53.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.644 "is_configured": false, 00:26:53.644 "data_offset": 0, 00:26:53.644 "data_size": 0 00:26:53.644 } 00:26:53.644 ] 00:26:53.644 }' 00:26:53.644 11:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:53.644 11:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.212 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:54.471 [2024-06-10 11:50:26.326157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:54.471 [2024-06-10 11:50:26.326660] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:26:54.471 [2024-06-10 11:50:26.326810] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:54.471 [2024-06-10 11:50:26.327012] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:54.471 [2024-06-10 11:50:26.327398] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:26:54.471 BaseBdev4 00:26:54.471 [2024-06-10 11:50:26.327554] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:26:54.471 [2024-06-10 11:50:26.327783] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.471 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:54.471 11:50:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:26:54.471 11:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:26:54.471 11:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:26:54.471 11:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:26:54.471 11:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:26:54.471 11:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:54.732 11:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:54.732 [ 00:26:54.732 { 00:26:54.732 "name": "BaseBdev4", 00:26:54.732 "aliases": [ 00:26:54.732 "e616a62c-5a39-4a61-b3d4-a3462aac9d62" 00:26:54.732 ], 00:26:54.732 "product_name": "Malloc disk", 00:26:54.732 "block_size": 512, 00:26:54.732 "num_blocks": 65536, 00:26:54.732 "uuid": "e616a62c-5a39-4a61-b3d4-a3462aac9d62", 00:26:54.732 "assigned_rate_limits": { 00:26:54.732 "rw_ios_per_sec": 0, 00:26:54.732 "rw_mbytes_per_sec": 0, 00:26:54.732 "r_mbytes_per_sec": 0, 00:26:54.732 "w_mbytes_per_sec": 0 00:26:54.732 }, 00:26:54.732 "claimed": true, 00:26:54.732 "claim_type": "exclusive_write", 00:26:54.732 "zoned": false, 00:26:54.732 "supported_io_types": { 00:26:54.732 "read": true, 00:26:54.732 "write": true, 00:26:54.732 "unmap": true, 00:26:54.732 "write_zeroes": true, 00:26:54.732 "flush": true, 00:26:54.732 "reset": true, 00:26:54.732 "compare": false, 00:26:54.732 "compare_and_write": false, 00:26:54.732 "abort": true, 00:26:54.732 "nvme_admin": false, 00:26:54.732 "nvme_io": false 00:26:54.732 }, 00:26:54.732 "memory_domains": [ 00:26:54.732 { 00:26:54.732 "dma_device_id": "system", 00:26:54.732 "dma_device_type": 1 00:26:54.732 }, 00:26:54.732 { 00:26:54.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.732 "dma_device_type": 2 00:26:54.732 } 00:26:54.732 ], 00:26:54.732 "driver_specific": {} 00:26:54.732 } 00:26:54.732 ] 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
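[editor's note] The numbers in these dumps fit together: each 65536-block malloc disk reserves 2048 blocks for the raid superblock (data_offset 2048, data_size 63488), so the four-member concat volume announces 4 x 63488 = 253952 blocks, which is the blockcnt logged just above when the array comes online. A small consistency check over the same RPC output, assuming only the field names shown in these dumps; the plain sum holds for concat, not for levels with redundancy:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Sum the usable data_size of all configured members and compare it with the
    # num_blocks the assembled concat volume reports.
    expected=$($RPC bdev_raid_get_bdevs all \
        | jq '[.[] | select(.name == "Existed_Raid").base_bdevs_list[]
               | select(.is_configured).data_size] | add')
    actual=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[0].num_blocks')
    [[ "$expected" -eq "$actual" ]] && echo "concat size consistent: $actual blocks"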
00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.040 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.040 "name": "Existed_Raid", 00:26:55.040 "uuid": "84dc7f47-2bf2-475c-ae16-bc09e2a031d5", 00:26:55.040 "strip_size_kb": 64, 00:26:55.040 "state": "online", 00:26:55.040 "raid_level": "concat", 00:26:55.040 "superblock": true, 00:26:55.040 "num_base_bdevs": 4, 00:26:55.040 "num_base_bdevs_discovered": 4, 00:26:55.040 "num_base_bdevs_operational": 4, 00:26:55.040 "base_bdevs_list": [ 00:26:55.040 { 00:26:55.040 "name": "BaseBdev1", 00:26:55.040 "uuid": "0bcb4c46-d516-40f6-9c7e-7b819a13303e", 00:26:55.040 "is_configured": true, 00:26:55.040 "data_offset": 2048, 00:26:55.040 "data_size": 63488 00:26:55.040 }, 00:26:55.040 { 00:26:55.040 "name": "BaseBdev2", 00:26:55.040 "uuid": "e888b863-1d7d-45d7-8d50-d1d9dc7067ed", 00:26:55.040 "is_configured": true, 00:26:55.040 "data_offset": 2048, 00:26:55.040 "data_size": 63488 00:26:55.040 }, 00:26:55.040 { 00:26:55.040 "name": "BaseBdev3", 00:26:55.040 "uuid": "4db82c07-42cf-4514-a40e-281266a68e5c", 00:26:55.041 "is_configured": true, 00:26:55.041 "data_offset": 2048, 00:26:55.041 "data_size": 63488 00:26:55.041 }, 00:26:55.041 { 00:26:55.041 "name": "BaseBdev4", 00:26:55.041 "uuid": "e616a62c-5a39-4a61-b3d4-a3462aac9d62", 00:26:55.041 "is_configured": true, 00:26:55.041 "data_offset": 2048, 00:26:55.041 "data_size": 63488 00:26:55.041 } 00:26:55.041 ] 00:26:55.041 }' 00:26:55.041 11:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.041 11:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:55.975 [2024-06-10 11:50:27.978777] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:55.975 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:55.975 "name": "Existed_Raid", 00:26:55.975 "aliases": [ 00:26:55.975 "84dc7f47-2bf2-475c-ae16-bc09e2a031d5" 00:26:55.975 ], 00:26:55.975 
"product_name": "Raid Volume", 00:26:55.975 "block_size": 512, 00:26:55.975 "num_blocks": 253952, 00:26:55.975 "uuid": "84dc7f47-2bf2-475c-ae16-bc09e2a031d5", 00:26:55.975 "assigned_rate_limits": { 00:26:55.975 "rw_ios_per_sec": 0, 00:26:55.975 "rw_mbytes_per_sec": 0, 00:26:55.975 "r_mbytes_per_sec": 0, 00:26:55.975 "w_mbytes_per_sec": 0 00:26:55.975 }, 00:26:55.975 "claimed": false, 00:26:55.975 "zoned": false, 00:26:55.975 "supported_io_types": { 00:26:55.975 "read": true, 00:26:55.975 "write": true, 00:26:55.975 "unmap": true, 00:26:55.975 "write_zeroes": true, 00:26:55.975 "flush": true, 00:26:55.975 "reset": true, 00:26:55.975 "compare": false, 00:26:55.975 "compare_and_write": false, 00:26:55.975 "abort": false, 00:26:55.975 "nvme_admin": false, 00:26:55.975 "nvme_io": false 00:26:55.975 }, 00:26:55.975 "memory_domains": [ 00:26:55.975 { 00:26:55.975 "dma_device_id": "system", 00:26:55.975 "dma_device_type": 1 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.975 "dma_device_type": 2 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "dma_device_id": "system", 00:26:55.975 "dma_device_type": 1 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.975 "dma_device_type": 2 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "dma_device_id": "system", 00:26:55.975 "dma_device_type": 1 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.975 "dma_device_type": 2 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "dma_device_id": "system", 00:26:55.975 "dma_device_type": 1 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.975 "dma_device_type": 2 00:26:55.975 } 00:26:55.975 ], 00:26:55.975 "driver_specific": { 00:26:55.975 "raid": { 00:26:55.975 "uuid": "84dc7f47-2bf2-475c-ae16-bc09e2a031d5", 00:26:55.975 "strip_size_kb": 64, 00:26:55.975 "state": "online", 00:26:55.975 "raid_level": "concat", 00:26:55.975 "superblock": true, 00:26:55.975 "num_base_bdevs": 4, 00:26:55.975 "num_base_bdevs_discovered": 4, 00:26:55.975 "num_base_bdevs_operational": 4, 00:26:55.975 "base_bdevs_list": [ 00:26:55.975 { 00:26:55.975 "name": "BaseBdev1", 00:26:55.975 "uuid": "0bcb4c46-d516-40f6-9c7e-7b819a13303e", 00:26:55.975 "is_configured": true, 00:26:55.975 "data_offset": 2048, 00:26:55.975 "data_size": 63488 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "name": "BaseBdev2", 00:26:55.975 "uuid": "e888b863-1d7d-45d7-8d50-d1d9dc7067ed", 00:26:55.975 "is_configured": true, 00:26:55.975 "data_offset": 2048, 00:26:55.975 "data_size": 63488 00:26:55.975 }, 00:26:55.975 { 00:26:55.975 "name": "BaseBdev3", 00:26:55.975 "uuid": "4db82c07-42cf-4514-a40e-281266a68e5c", 00:26:55.975 "is_configured": true, 00:26:55.975 "data_offset": 2048, 00:26:55.975 "data_size": 63488 00:26:55.975 }, 00:26:55.976 { 00:26:55.976 "name": "BaseBdev4", 00:26:55.976 "uuid": "e616a62c-5a39-4a61-b3d4-a3462aac9d62", 00:26:55.976 "is_configured": true, 00:26:55.976 "data_offset": 2048, 00:26:55.976 "data_size": 63488 00:26:55.976 } 00:26:55.976 ] 00:26:55.976 } 00:26:55.976 } 00:26:55.976 }' 00:26:55.976 11:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:56.234 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:56.234 BaseBdev2 00:26:56.234 BaseBdev3 00:26:56.234 BaseBdev4' 00:26:56.234 11:50:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:56.234 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:56.234 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:56.234 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:56.234 "name": "BaseBdev1", 00:26:56.234 "aliases": [ 00:26:56.234 "0bcb4c46-d516-40f6-9c7e-7b819a13303e" 00:26:56.234 ], 00:26:56.234 "product_name": "Malloc disk", 00:26:56.234 "block_size": 512, 00:26:56.234 "num_blocks": 65536, 00:26:56.234 "uuid": "0bcb4c46-d516-40f6-9c7e-7b819a13303e", 00:26:56.234 "assigned_rate_limits": { 00:26:56.234 "rw_ios_per_sec": 0, 00:26:56.234 "rw_mbytes_per_sec": 0, 00:26:56.234 "r_mbytes_per_sec": 0, 00:26:56.234 "w_mbytes_per_sec": 0 00:26:56.234 }, 00:26:56.234 "claimed": true, 00:26:56.234 "claim_type": "exclusive_write", 00:26:56.234 "zoned": false, 00:26:56.234 "supported_io_types": { 00:26:56.234 "read": true, 00:26:56.234 "write": true, 00:26:56.234 "unmap": true, 00:26:56.234 "write_zeroes": true, 00:26:56.234 "flush": true, 00:26:56.234 "reset": true, 00:26:56.234 "compare": false, 00:26:56.234 "compare_and_write": false, 00:26:56.234 "abort": true, 00:26:56.234 "nvme_admin": false, 00:26:56.234 "nvme_io": false 00:26:56.234 }, 00:26:56.234 "memory_domains": [ 00:26:56.234 { 00:26:56.234 "dma_device_id": "system", 00:26:56.234 "dma_device_type": 1 00:26:56.234 }, 00:26:56.234 { 00:26:56.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.234 "dma_device_type": 2 00:26:56.234 } 00:26:56.234 ], 00:26:56.234 "driver_specific": {} 00:26:56.234 }' 00:26:56.234 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:56.492 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:56.751 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:56.751 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:56.751 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:56.751 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:56.751 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:57.009 11:50:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:57.009 "name": "BaseBdev2", 00:26:57.009 "aliases": [ 00:26:57.009 "e888b863-1d7d-45d7-8d50-d1d9dc7067ed" 00:26:57.009 ], 00:26:57.009 "product_name": "Malloc disk", 00:26:57.009 "block_size": 512, 00:26:57.009 "num_blocks": 65536, 00:26:57.009 "uuid": "e888b863-1d7d-45d7-8d50-d1d9dc7067ed", 00:26:57.009 "assigned_rate_limits": { 00:26:57.009 "rw_ios_per_sec": 0, 00:26:57.009 "rw_mbytes_per_sec": 0, 00:26:57.009 "r_mbytes_per_sec": 0, 00:26:57.009 "w_mbytes_per_sec": 0 00:26:57.009 }, 00:26:57.009 "claimed": true, 00:26:57.009 "claim_type": "exclusive_write", 00:26:57.009 "zoned": false, 00:26:57.009 "supported_io_types": { 00:26:57.009 "read": true, 00:26:57.009 "write": true, 00:26:57.009 "unmap": true, 00:26:57.009 "write_zeroes": true, 00:26:57.009 "flush": true, 00:26:57.009 "reset": true, 00:26:57.009 "compare": false, 00:26:57.009 "compare_and_write": false, 00:26:57.009 "abort": true, 00:26:57.009 "nvme_admin": false, 00:26:57.009 "nvme_io": false 00:26:57.009 }, 00:26:57.009 "memory_domains": [ 00:26:57.009 { 00:26:57.009 "dma_device_id": "system", 00:26:57.009 "dma_device_type": 1 00:26:57.009 }, 00:26:57.009 { 00:26:57.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.009 "dma_device_type": 2 00:26:57.009 } 00:26:57.009 ], 00:26:57.009 "driver_specific": {} 00:26:57.009 }' 00:26:57.009 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.009 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.009 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:57.009 11:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.009 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:57.268 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:57.526 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:57.526 "name": "BaseBdev3", 00:26:57.526 "aliases": [ 00:26:57.526 "4db82c07-42cf-4514-a40e-281266a68e5c" 00:26:57.526 ], 00:26:57.526 "product_name": "Malloc disk", 00:26:57.526 "block_size": 512, 00:26:57.526 "num_blocks": 65536, 00:26:57.526 "uuid": "4db82c07-42cf-4514-a40e-281266a68e5c", 00:26:57.526 "assigned_rate_limits": { 00:26:57.526 "rw_ios_per_sec": 0, 00:26:57.526 "rw_mbytes_per_sec": 0, 
00:26:57.526 "r_mbytes_per_sec": 0, 00:26:57.526 "w_mbytes_per_sec": 0 00:26:57.526 }, 00:26:57.526 "claimed": true, 00:26:57.526 "claim_type": "exclusive_write", 00:26:57.526 "zoned": false, 00:26:57.526 "supported_io_types": { 00:26:57.526 "read": true, 00:26:57.526 "write": true, 00:26:57.526 "unmap": true, 00:26:57.526 "write_zeroes": true, 00:26:57.526 "flush": true, 00:26:57.526 "reset": true, 00:26:57.526 "compare": false, 00:26:57.526 "compare_and_write": false, 00:26:57.526 "abort": true, 00:26:57.526 "nvme_admin": false, 00:26:57.526 "nvme_io": false 00:26:57.526 }, 00:26:57.527 "memory_domains": [ 00:26:57.527 { 00:26:57.527 "dma_device_id": "system", 00:26:57.527 "dma_device_type": 1 00:26:57.527 }, 00:26:57.527 { 00:26:57.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.527 "dma_device_type": 2 00:26:57.527 } 00:26:57.527 ], 00:26:57.527 "driver_specific": {} 00:26:57.527 }' 00:26:57.527 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.784 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.784 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:57.785 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.785 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.785 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:57.785 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.785 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.785 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:57.785 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.043 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.043 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:58.043 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:58.043 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:58.043 11:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:58.301 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:58.301 "name": "BaseBdev4", 00:26:58.301 "aliases": [ 00:26:58.301 "e616a62c-5a39-4a61-b3d4-a3462aac9d62" 00:26:58.301 ], 00:26:58.301 "product_name": "Malloc disk", 00:26:58.301 "block_size": 512, 00:26:58.301 "num_blocks": 65536, 00:26:58.301 "uuid": "e616a62c-5a39-4a61-b3d4-a3462aac9d62", 00:26:58.301 "assigned_rate_limits": { 00:26:58.301 "rw_ios_per_sec": 0, 00:26:58.301 "rw_mbytes_per_sec": 0, 00:26:58.301 "r_mbytes_per_sec": 0, 00:26:58.301 "w_mbytes_per_sec": 0 00:26:58.301 }, 00:26:58.301 "claimed": true, 00:26:58.301 "claim_type": "exclusive_write", 00:26:58.301 "zoned": false, 00:26:58.301 "supported_io_types": { 00:26:58.301 "read": true, 00:26:58.301 "write": true, 00:26:58.301 "unmap": true, 00:26:58.301 "write_zeroes": true, 00:26:58.301 "flush": true, 00:26:58.301 "reset": true, 00:26:58.301 "compare": false, 00:26:58.301 
"compare_and_write": false, 00:26:58.301 "abort": true, 00:26:58.301 "nvme_admin": false, 00:26:58.301 "nvme_io": false 00:26:58.301 }, 00:26:58.301 "memory_domains": [ 00:26:58.301 { 00:26:58.301 "dma_device_id": "system", 00:26:58.301 "dma_device_type": 1 00:26:58.301 }, 00:26:58.301 { 00:26:58.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.301 "dma_device_type": 2 00:26:58.301 } 00:26:58.301 ], 00:26:58.301 "driver_specific": {} 00:26:58.301 }' 00:26:58.301 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.301 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.301 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:58.301 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.301 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.558 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:58.558 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.558 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.558 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:58.558 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.558 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.558 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:58.558 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:58.815 [2024-06-10 11:50:30.807413] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:58.815 [2024-06-10 11:50:30.807665] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:58.815 [2024-06-10 11:50:30.807802] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.072 11:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.330 11:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.330 "name": "Existed_Raid", 00:26:59.330 "uuid": "84dc7f47-2bf2-475c-ae16-bc09e2a031d5", 00:26:59.330 "strip_size_kb": 64, 00:26:59.330 "state": "offline", 00:26:59.330 "raid_level": "concat", 00:26:59.330 "superblock": true, 00:26:59.330 "num_base_bdevs": 4, 00:26:59.330 "num_base_bdevs_discovered": 3, 00:26:59.330 "num_base_bdevs_operational": 3, 00:26:59.330 "base_bdevs_list": [ 00:26:59.330 { 00:26:59.330 "name": null, 00:26:59.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.330 "is_configured": false, 00:26:59.330 "data_offset": 2048, 00:26:59.330 "data_size": 63488 00:26:59.330 }, 00:26:59.330 { 00:26:59.330 "name": "BaseBdev2", 00:26:59.330 "uuid": "e888b863-1d7d-45d7-8d50-d1d9dc7067ed", 00:26:59.330 "is_configured": true, 00:26:59.330 "data_offset": 2048, 00:26:59.330 "data_size": 63488 00:26:59.330 }, 00:26:59.330 { 00:26:59.330 "name": "BaseBdev3", 00:26:59.330 "uuid": "4db82c07-42cf-4514-a40e-281266a68e5c", 00:26:59.330 "is_configured": true, 00:26:59.330 "data_offset": 2048, 00:26:59.330 "data_size": 63488 00:26:59.330 }, 00:26:59.330 { 00:26:59.330 "name": "BaseBdev4", 00:26:59.330 "uuid": "e616a62c-5a39-4a61-b3d4-a3462aac9d62", 00:26:59.330 "is_configured": true, 00:26:59.330 "data_offset": 2048, 00:26:59.330 "data_size": 63488 00:26:59.330 } 00:26:59.330 ] 00:26:59.330 }' 00:26:59.330 11:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.330 11:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.896 11:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:59.896 11:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:59.896 11:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:59.896 11:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.154 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:00.154 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:00.154 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:00.412 [2024-06-10 11:50:32.375203] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:00.669 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # 
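The sequence above is driven entirely through scripts/rpc.py against the test's dedicated RPC socket, so the base-bdev delete/re-create cycle it exercises can be replayed by hand. A minimal sketch follows, assuming a locally running spdk_tgt bound to the same /var/tmp/spdk-raid.sock; the socket path, bdev name and malloc geometry are taken from the trace itself, the rest is illustrative.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# delete one of the claimed malloc base bdevs; bdev_raid logs _raid_bdev_remove_base_bdev for it
$RPC bdev_malloc_delete BaseBdev2
# re-create a 32 MB / 512-byte-block malloc bdev (65536 blocks, matching the trace) as the replacement
$RPC bdev_malloc_create 32 512 -b BaseBdev2
# wait for bdev examination to finish, then confirm the new bdev is visible
$RPC bdev_wait_for_examine
$RPC bdev_get_bdevs -b BaseBdev2 -t 2000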
(( i++ )) 00:27:00.669 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:00.669 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.669 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:00.669 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:00.669 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:00.669 11:50:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:00.928 [2024-06-10 11:50:32.881523] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:01.186 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:01.186 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:01.186 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:01.186 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.444 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:01.444 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:01.444 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:01.702 [2024-06-10 11:50:33.551182] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:01.702 [2024-06-10 11:50:33.551523] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:27:01.702 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:01.702 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:01.702 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.702 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:01.959 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:01.959 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:01.959 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:27:01.959 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:27:01.959 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:01.959 11:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:02.216 BaseBdev2 00:27:02.216 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 
00:27:02.216 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:27:02.216 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:27:02.216 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:27:02.216 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:27:02.216 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:27:02.216 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:02.783 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:02.783 [ 00:27:02.783 { 00:27:02.783 "name": "BaseBdev2", 00:27:02.783 "aliases": [ 00:27:02.783 "6cf92919-8bed-44f4-b721-7931af86d115" 00:27:02.783 ], 00:27:02.783 "product_name": "Malloc disk", 00:27:02.783 "block_size": 512, 00:27:02.783 "num_blocks": 65536, 00:27:02.783 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:02.783 "assigned_rate_limits": { 00:27:02.783 "rw_ios_per_sec": 0, 00:27:02.783 "rw_mbytes_per_sec": 0, 00:27:02.783 "r_mbytes_per_sec": 0, 00:27:02.783 "w_mbytes_per_sec": 0 00:27:02.783 }, 00:27:02.783 "claimed": false, 00:27:02.783 "zoned": false, 00:27:02.783 "supported_io_types": { 00:27:02.783 "read": true, 00:27:02.783 "write": true, 00:27:02.783 "unmap": true, 00:27:02.783 "write_zeroes": true, 00:27:02.783 "flush": true, 00:27:02.783 "reset": true, 00:27:02.783 "compare": false, 00:27:02.783 "compare_and_write": false, 00:27:02.783 "abort": true, 00:27:02.783 "nvme_admin": false, 00:27:02.783 "nvme_io": false 00:27:02.783 }, 00:27:02.783 "memory_domains": [ 00:27:02.783 { 00:27:02.783 "dma_device_id": "system", 00:27:02.783 "dma_device_type": 1 00:27:02.783 }, 00:27:02.783 { 00:27:02.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.783 "dma_device_type": 2 00:27:02.783 } 00:27:02.783 ], 00:27:02.783 "driver_specific": {} 00:27:02.783 } 00:27:02.783 ] 00:27:02.783 11:50:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:27:02.783 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:02.783 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:02.783 11:50:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:03.350 BaseBdev3 00:27:03.350 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:27:03.350 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:27:03.350 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:27:03.350 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:27:03.350 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:27:03.350 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:27:03.350 11:50:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:03.350 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:03.608 [ 00:27:03.608 { 00:27:03.608 "name": "BaseBdev3", 00:27:03.608 "aliases": [ 00:27:03.608 "c668b454-0763-44f6-bc96-5a9310e9490b" 00:27:03.608 ], 00:27:03.608 "product_name": "Malloc disk", 00:27:03.608 "block_size": 512, 00:27:03.608 "num_blocks": 65536, 00:27:03.608 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:03.608 "assigned_rate_limits": { 00:27:03.608 "rw_ios_per_sec": 0, 00:27:03.608 "rw_mbytes_per_sec": 0, 00:27:03.608 "r_mbytes_per_sec": 0, 00:27:03.608 "w_mbytes_per_sec": 0 00:27:03.608 }, 00:27:03.608 "claimed": false, 00:27:03.608 "zoned": false, 00:27:03.608 "supported_io_types": { 00:27:03.608 "read": true, 00:27:03.608 "write": true, 00:27:03.608 "unmap": true, 00:27:03.608 "write_zeroes": true, 00:27:03.608 "flush": true, 00:27:03.608 "reset": true, 00:27:03.608 "compare": false, 00:27:03.608 "compare_and_write": false, 00:27:03.608 "abort": true, 00:27:03.608 "nvme_admin": false, 00:27:03.608 "nvme_io": false 00:27:03.608 }, 00:27:03.608 "memory_domains": [ 00:27:03.608 { 00:27:03.608 "dma_device_id": "system", 00:27:03.608 "dma_device_type": 1 00:27:03.608 }, 00:27:03.608 { 00:27:03.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.608 "dma_device_type": 2 00:27:03.608 } 00:27:03.608 ], 00:27:03.608 "driver_specific": {} 00:27:03.608 } 00:27:03.608 ] 00:27:03.608 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:27:03.608 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:03.608 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:03.608 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:03.903 BaseBdev4 00:27:03.903 11:50:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:27:03.903 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:27:03.903 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:27:03.903 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:27:03.903 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:27:03.903 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:27:03.903 11:50:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:04.184 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:04.443 [ 00:27:04.443 { 00:27:04.443 "name": "BaseBdev4", 00:27:04.443 "aliases": [ 00:27:04.443 "34f023b6-027b-4eed-9b1e-b2c4a1f38561" 00:27:04.443 ], 00:27:04.443 "product_name": "Malloc disk", 00:27:04.443 "block_size": 512, 
00:27:04.443 "num_blocks": 65536, 00:27:04.443 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:04.443 "assigned_rate_limits": { 00:27:04.443 "rw_ios_per_sec": 0, 00:27:04.443 "rw_mbytes_per_sec": 0, 00:27:04.443 "r_mbytes_per_sec": 0, 00:27:04.443 "w_mbytes_per_sec": 0 00:27:04.443 }, 00:27:04.443 "claimed": false, 00:27:04.443 "zoned": false, 00:27:04.443 "supported_io_types": { 00:27:04.443 "read": true, 00:27:04.443 "write": true, 00:27:04.443 "unmap": true, 00:27:04.443 "write_zeroes": true, 00:27:04.443 "flush": true, 00:27:04.443 "reset": true, 00:27:04.443 "compare": false, 00:27:04.443 "compare_and_write": false, 00:27:04.443 "abort": true, 00:27:04.443 "nvme_admin": false, 00:27:04.443 "nvme_io": false 00:27:04.443 }, 00:27:04.443 "memory_domains": [ 00:27:04.443 { 00:27:04.443 "dma_device_id": "system", 00:27:04.443 "dma_device_type": 1 00:27:04.443 }, 00:27:04.443 { 00:27:04.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.443 "dma_device_type": 2 00:27:04.443 } 00:27:04.443 ], 00:27:04.443 "driver_specific": {} 00:27:04.443 } 00:27:04.443 ] 00:27:04.443 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:27:04.443 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:04.443 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:04.443 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:04.702 [2024-06-10 11:50:36.671605] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:04.702 [2024-06-10 11:50:36.672353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:04.702 [2024-06-10 11:50:36.672534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:04.702 [2024-06-10 11:50:36.674822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:04.702 [2024-06-10 11:50:36.675018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.702 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:04.960 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.960 "name": "Existed_Raid", 00:27:04.960 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:04.960 "strip_size_kb": 64, 00:27:04.960 "state": "configuring", 00:27:04.960 "raid_level": "concat", 00:27:04.960 "superblock": true, 00:27:04.960 "num_base_bdevs": 4, 00:27:04.960 "num_base_bdevs_discovered": 3, 00:27:04.960 "num_base_bdevs_operational": 4, 00:27:04.960 "base_bdevs_list": [ 00:27:04.960 { 00:27:04.960 "name": "BaseBdev1", 00:27:04.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.960 "is_configured": false, 00:27:04.960 "data_offset": 0, 00:27:04.960 "data_size": 0 00:27:04.960 }, 00:27:04.960 { 00:27:04.960 "name": "BaseBdev2", 00:27:04.960 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:04.960 "is_configured": true, 00:27:04.960 "data_offset": 2048, 00:27:04.960 "data_size": 63488 00:27:04.960 }, 00:27:04.960 { 00:27:04.960 "name": "BaseBdev3", 00:27:04.960 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:04.960 "is_configured": true, 00:27:04.960 "data_offset": 2048, 00:27:04.960 "data_size": 63488 00:27:04.960 }, 00:27:04.960 { 00:27:04.960 "name": "BaseBdev4", 00:27:04.960 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:04.960 "is_configured": true, 00:27:04.960 "data_offset": 2048, 00:27:04.960 "data_size": 63488 00:27:04.960 } 00:27:04.960 ] 00:27:04.960 }' 00:27:04.960 11:50:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.960 11:50:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.525 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:05.783 [2024-06-10 11:50:37.759805] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.783 11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.783 
11:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.042 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.042 "name": "Existed_Raid", 00:27:06.042 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:06.042 "strip_size_kb": 64, 00:27:06.042 "state": "configuring", 00:27:06.042 "raid_level": "concat", 00:27:06.042 "superblock": true, 00:27:06.042 "num_base_bdevs": 4, 00:27:06.042 "num_base_bdevs_discovered": 2, 00:27:06.042 "num_base_bdevs_operational": 4, 00:27:06.042 "base_bdevs_list": [ 00:27:06.042 { 00:27:06.042 "name": "BaseBdev1", 00:27:06.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.042 "is_configured": false, 00:27:06.042 "data_offset": 0, 00:27:06.042 "data_size": 0 00:27:06.042 }, 00:27:06.042 { 00:27:06.042 "name": null, 00:27:06.042 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:06.042 "is_configured": false, 00:27:06.042 "data_offset": 2048, 00:27:06.042 "data_size": 63488 00:27:06.042 }, 00:27:06.042 { 00:27:06.042 "name": "BaseBdev3", 00:27:06.042 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:06.042 "is_configured": true, 00:27:06.042 "data_offset": 2048, 00:27:06.042 "data_size": 63488 00:27:06.042 }, 00:27:06.042 { 00:27:06.042 "name": "BaseBdev4", 00:27:06.042 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:06.042 "is_configured": true, 00:27:06.042 "data_offset": 2048, 00:27:06.042 "data_size": 63488 00:27:06.042 } 00:27:06.042 ] 00:27:06.042 }' 00:27:06.042 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.042 11:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.976 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.976 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:06.976 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:27:06.976 11:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:07.307 [2024-06-10 11:50:39.128760] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:07.307 BaseBdev1 00:27:07.307 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:27:07.307 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:27:07.307 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:27:07.307 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:27:07.307 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:27:07.307 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:27:07.307 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:07.581 [ 00:27:07.581 { 00:27:07.581 "name": "BaseBdev1", 00:27:07.581 "aliases": [ 00:27:07.581 "8cc1b954-f340-448a-a5f6-f5030332af01" 00:27:07.581 ], 00:27:07.581 "product_name": "Malloc disk", 00:27:07.581 "block_size": 512, 00:27:07.581 "num_blocks": 65536, 00:27:07.581 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:07.581 "assigned_rate_limits": { 00:27:07.581 "rw_ios_per_sec": 0, 00:27:07.581 "rw_mbytes_per_sec": 0, 00:27:07.581 "r_mbytes_per_sec": 0, 00:27:07.581 "w_mbytes_per_sec": 0 00:27:07.581 }, 00:27:07.581 "claimed": true, 00:27:07.581 "claim_type": "exclusive_write", 00:27:07.581 "zoned": false, 00:27:07.581 "supported_io_types": { 00:27:07.581 "read": true, 00:27:07.581 "write": true, 00:27:07.581 "unmap": true, 00:27:07.581 "write_zeroes": true, 00:27:07.581 "flush": true, 00:27:07.581 "reset": true, 00:27:07.581 "compare": false, 00:27:07.581 "compare_and_write": false, 00:27:07.581 "abort": true, 00:27:07.581 "nvme_admin": false, 00:27:07.581 "nvme_io": false 00:27:07.581 }, 00:27:07.581 "memory_domains": [ 00:27:07.581 { 00:27:07.581 "dma_device_id": "system", 00:27:07.581 "dma_device_type": 1 00:27:07.581 }, 00:27:07.581 { 00:27:07.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.581 "dma_device_type": 2 00:27:07.581 } 00:27:07.581 ], 00:27:07.581 "driver_specific": {} 00:27:07.581 } 00:27:07.581 ] 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.581 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:07.839 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.839 "name": "Existed_Raid", 00:27:07.839 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:07.839 "strip_size_kb": 64, 00:27:07.839 "state": "configuring", 00:27:07.839 "raid_level": "concat", 00:27:07.839 "superblock": true, 00:27:07.839 "num_base_bdevs": 4, 00:27:07.839 "num_base_bdevs_discovered": 3, 
00:27:07.839 "num_base_bdevs_operational": 4, 00:27:07.839 "base_bdevs_list": [ 00:27:07.839 { 00:27:07.839 "name": "BaseBdev1", 00:27:07.839 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:07.839 "is_configured": true, 00:27:07.839 "data_offset": 2048, 00:27:07.839 "data_size": 63488 00:27:07.839 }, 00:27:07.839 { 00:27:07.839 "name": null, 00:27:07.839 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:07.839 "is_configured": false, 00:27:07.839 "data_offset": 2048, 00:27:07.839 "data_size": 63488 00:27:07.839 }, 00:27:07.839 { 00:27:07.839 "name": "BaseBdev3", 00:27:07.839 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:07.839 "is_configured": true, 00:27:07.839 "data_offset": 2048, 00:27:07.839 "data_size": 63488 00:27:07.839 }, 00:27:07.839 { 00:27:07.839 "name": "BaseBdev4", 00:27:07.839 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:07.839 "is_configured": true, 00:27:07.839 "data_offset": 2048, 00:27:07.839 "data_size": 63488 00:27:07.839 } 00:27:07.839 ] 00:27:07.839 }' 00:27:07.840 11:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.840 11:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.773 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.773 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:08.773 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:27:08.774 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:27:09.032 [2024-06-10 11:50:40.979170] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.032 11:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:09.290 11:50:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:09.290 "name": "Existed_Raid", 00:27:09.290 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:09.290 "strip_size_kb": 64, 00:27:09.290 "state": "configuring", 00:27:09.290 "raid_level": "concat", 00:27:09.290 "superblock": true, 00:27:09.290 "num_base_bdevs": 4, 00:27:09.290 "num_base_bdevs_discovered": 2, 00:27:09.290 "num_base_bdevs_operational": 4, 00:27:09.290 "base_bdevs_list": [ 00:27:09.290 { 00:27:09.290 "name": "BaseBdev1", 00:27:09.290 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:09.290 "is_configured": true, 00:27:09.290 "data_offset": 2048, 00:27:09.290 "data_size": 63488 00:27:09.290 }, 00:27:09.290 { 00:27:09.290 "name": null, 00:27:09.290 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:09.290 "is_configured": false, 00:27:09.290 "data_offset": 2048, 00:27:09.290 "data_size": 63488 00:27:09.290 }, 00:27:09.290 { 00:27:09.290 "name": null, 00:27:09.290 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:09.290 "is_configured": false, 00:27:09.290 "data_offset": 2048, 00:27:09.290 "data_size": 63488 00:27:09.290 }, 00:27:09.290 { 00:27:09.290 "name": "BaseBdev4", 00:27:09.290 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:09.290 "is_configured": true, 00:27:09.290 "data_offset": 2048, 00:27:09.290 "data_size": 63488 00:27:09.290 } 00:27:09.290 ] 00:27:09.290 }' 00:27:09.291 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:09.291 11:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.233 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.233 11:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:10.233 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:27:10.233 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:10.502 [2024-06-10 11:50:42.375488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.502 
11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.502 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:10.766 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:10.766 "name": "Existed_Raid", 00:27:10.766 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:10.766 "strip_size_kb": 64, 00:27:10.766 "state": "configuring", 00:27:10.766 "raid_level": "concat", 00:27:10.766 "superblock": true, 00:27:10.766 "num_base_bdevs": 4, 00:27:10.766 "num_base_bdevs_discovered": 3, 00:27:10.766 "num_base_bdevs_operational": 4, 00:27:10.766 "base_bdevs_list": [ 00:27:10.766 { 00:27:10.766 "name": "BaseBdev1", 00:27:10.766 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:10.766 "is_configured": true, 00:27:10.766 "data_offset": 2048, 00:27:10.766 "data_size": 63488 00:27:10.766 }, 00:27:10.766 { 00:27:10.766 "name": null, 00:27:10.766 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:10.766 "is_configured": false, 00:27:10.766 "data_offset": 2048, 00:27:10.767 "data_size": 63488 00:27:10.767 }, 00:27:10.767 { 00:27:10.767 "name": "BaseBdev3", 00:27:10.767 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:10.767 "is_configured": true, 00:27:10.767 "data_offset": 2048, 00:27:10.767 "data_size": 63488 00:27:10.767 }, 00:27:10.767 { 00:27:10.767 "name": "BaseBdev4", 00:27:10.767 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:10.767 "is_configured": true, 00:27:10.767 "data_offset": 2048, 00:27:10.767 "data_size": 63488 00:27:10.767 } 00:27:10.767 ] 00:27:10.767 }' 00:27:10.767 11:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:10.767 11:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.332 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:11.332 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.590 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:27:11.591 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:11.591 [2024-06-10 11:50:43.627774] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
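The verify_raid_bdev_state calls in the trace amount to fetching the raid view over RPC, jq-selecting the bdev under test and comparing a handful of fields against the expected values passed by the caller. A stand-alone sketch of that check, using the same socket, raid name and jq filter that appear in the trace (the expected values are the ones asserted at this point; the exact field list compared by the helper is paraphrased, not copied):
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# pull every raid bdev and keep only the one under test
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# compare the fields the helper asserts on: state, level, strip size and base-bdev counts
[[ $(jq -r '.state' <<<"$info") == configuring ]]
[[ $(jq -r '.raid_level' <<<"$info") == concat ]]
[[ $(jq -r '.strip_size_kb' <<<"$info") == 64 ]]
[[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 4 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 2 ]]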
00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:11.849 11:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.108 11:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.108 "name": "Existed_Raid", 00:27:12.108 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:12.108 "strip_size_kb": 64, 00:27:12.108 "state": "configuring", 00:27:12.108 "raid_level": "concat", 00:27:12.108 "superblock": true, 00:27:12.108 "num_base_bdevs": 4, 00:27:12.108 "num_base_bdevs_discovered": 2, 00:27:12.108 "num_base_bdevs_operational": 4, 00:27:12.108 "base_bdevs_list": [ 00:27:12.108 { 00:27:12.108 "name": null, 00:27:12.108 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:12.108 "is_configured": false, 00:27:12.108 "data_offset": 2048, 00:27:12.108 "data_size": 63488 00:27:12.108 }, 00:27:12.108 { 00:27:12.108 "name": null, 00:27:12.108 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:12.108 "is_configured": false, 00:27:12.108 "data_offset": 2048, 00:27:12.108 "data_size": 63488 00:27:12.108 }, 00:27:12.108 { 00:27:12.108 "name": "BaseBdev3", 00:27:12.108 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:12.108 "is_configured": true, 00:27:12.108 "data_offset": 2048, 00:27:12.108 "data_size": 63488 00:27:12.108 }, 00:27:12.108 { 00:27:12.108 "name": "BaseBdev4", 00:27:12.108 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:12.108 "is_configured": true, 00:27:12.108 "data_offset": 2048, 00:27:12.108 "data_size": 63488 00:27:12.108 } 00:27:12.108 ] 00:27:12.108 }' 00:27:12.108 11:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.108 11:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.674 11:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:12.674 11:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.240 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:27:13.240 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:13.240 [2024-06-10 11:50:45.293300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.499 "name": "Existed_Raid", 00:27:13.499 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:13.499 "strip_size_kb": 64, 00:27:13.499 "state": "configuring", 00:27:13.499 "raid_level": "concat", 00:27:13.499 "superblock": true, 00:27:13.499 "num_base_bdevs": 4, 00:27:13.499 "num_base_bdevs_discovered": 3, 00:27:13.499 "num_base_bdevs_operational": 4, 00:27:13.499 "base_bdevs_list": [ 00:27:13.499 { 00:27:13.499 "name": null, 00:27:13.499 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:13.499 "is_configured": false, 00:27:13.499 "data_offset": 2048, 00:27:13.499 "data_size": 63488 00:27:13.499 }, 00:27:13.499 { 00:27:13.499 "name": "BaseBdev2", 00:27:13.499 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:13.499 "is_configured": true, 00:27:13.499 "data_offset": 2048, 00:27:13.499 "data_size": 63488 00:27:13.499 }, 00:27:13.499 { 00:27:13.499 "name": "BaseBdev3", 00:27:13.499 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:13.499 "is_configured": true, 00:27:13.499 "data_offset": 2048, 00:27:13.499 "data_size": 63488 00:27:13.499 }, 00:27:13.499 { 00:27:13.499 "name": "BaseBdev4", 00:27:13.499 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:13.499 "is_configured": true, 00:27:13.499 "data_offset": 2048, 00:27:13.499 "data_size": 63488 00:27:13.499 } 00:27:13.499 ] 00:27:13.499 }' 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.499 11:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.470 11:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.470 11:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:14.470 11:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:27:14.470 11:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.470 11:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:14.728 11:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8cc1b954-f340-448a-a5f6-f5030332af01 00:27:14.986 [2024-06-10 11:50:47.019799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:14.986 [2024-06-10 11:50:47.020272] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:27:14.986 [2024-06-10 11:50:47.020396] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:14.986 [2024-06-10 11:50:47.020552] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:14.986 [2024-06-10 11:50:47.020912] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:27:14.986 [2024-06-10 11:50:47.021039] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:27:14.986 NewBaseBdev 00:27:14.986 [2024-06-10 11:50:47.021287] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.995 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:27:14.995 11:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:27:14.995 11:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:27:14.996 11:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:27:14.996 11:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:27:14.996 11:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:27:14.996 11:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:15.563 [ 00:27:15.563 { 00:27:15.563 "name": "NewBaseBdev", 00:27:15.563 "aliases": [ 00:27:15.563 "8cc1b954-f340-448a-a5f6-f5030332af01" 00:27:15.563 ], 00:27:15.563 "product_name": "Malloc disk", 00:27:15.563 "block_size": 512, 00:27:15.563 "num_blocks": 65536, 00:27:15.563 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:15.563 "assigned_rate_limits": { 00:27:15.563 "rw_ios_per_sec": 0, 00:27:15.563 "rw_mbytes_per_sec": 0, 00:27:15.563 "r_mbytes_per_sec": 0, 00:27:15.563 "w_mbytes_per_sec": 0 00:27:15.563 }, 00:27:15.563 "claimed": true, 00:27:15.563 "claim_type": "exclusive_write", 00:27:15.563 "zoned": false, 00:27:15.563 "supported_io_types": { 00:27:15.563 "read": true, 00:27:15.563 "write": true, 00:27:15.563 "unmap": true, 00:27:15.563 "write_zeroes": true, 00:27:15.563 "flush": true, 00:27:15.563 "reset": true, 00:27:15.563 "compare": false, 00:27:15.563 "compare_and_write": false, 00:27:15.563 "abort": true, 00:27:15.563 "nvme_admin": false, 00:27:15.563 "nvme_io": false 00:27:15.563 }, 00:27:15.563 "memory_domains": [ 00:27:15.563 { 00:27:15.563 "dma_device_id": "system", 00:27:15.563 "dma_device_type": 1 00:27:15.563 }, 00:27:15.563 { 00:27:15.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.563 "dma_device_type": 2 00:27:15.563 } 00:27:15.563 ], 00:27:15.563 "driver_specific": {} 00:27:15.563 } 00:27:15.563 ] 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # return 0 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:15.563 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.564 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.821 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.821 "name": "Existed_Raid", 00:27:15.821 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:15.821 "strip_size_kb": 64, 00:27:15.821 "state": "online", 00:27:15.821 "raid_level": "concat", 00:27:15.821 "superblock": true, 00:27:15.821 "num_base_bdevs": 4, 00:27:15.821 "num_base_bdevs_discovered": 4, 00:27:15.821 "num_base_bdevs_operational": 4, 00:27:15.821 "base_bdevs_list": [ 00:27:15.821 { 00:27:15.821 "name": "NewBaseBdev", 00:27:15.821 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:15.821 "is_configured": true, 00:27:15.821 "data_offset": 2048, 00:27:15.821 "data_size": 63488 00:27:15.821 }, 00:27:15.821 { 00:27:15.821 "name": "BaseBdev2", 00:27:15.821 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:15.821 "is_configured": true, 00:27:15.821 "data_offset": 2048, 00:27:15.821 "data_size": 63488 00:27:15.821 }, 00:27:15.821 { 00:27:15.821 "name": "BaseBdev3", 00:27:15.821 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:15.821 "is_configured": true, 00:27:15.821 "data_offset": 2048, 00:27:15.821 "data_size": 63488 00:27:15.821 }, 00:27:15.821 { 00:27:15.821 "name": "BaseBdev4", 00:27:15.821 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:15.821 "is_configured": true, 00:27:15.821 "data_offset": 2048, 00:27:15.821 "data_size": 63488 00:27:15.821 } 00:27:15.821 ] 00:27:15.821 }' 00:27:15.821 11:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.821 11:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:16.386 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:27:16.386 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:16.386 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:27:16.386 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:16.386 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:16.386 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:27:16.387 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:16.387 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:16.644 [2024-06-10 11:50:48.503557] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:16.644 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:16.644 "name": "Existed_Raid", 00:27:16.644 "aliases": [ 00:27:16.644 "10855a95-0b44-4d3e-8cb2-c4c17330b646" 00:27:16.644 ], 00:27:16.644 "product_name": "Raid Volume", 00:27:16.644 "block_size": 512, 00:27:16.644 "num_blocks": 253952, 00:27:16.644 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:16.644 "assigned_rate_limits": { 00:27:16.644 "rw_ios_per_sec": 0, 00:27:16.644 "rw_mbytes_per_sec": 0, 00:27:16.644 "r_mbytes_per_sec": 0, 00:27:16.644 "w_mbytes_per_sec": 0 00:27:16.644 }, 00:27:16.644 "claimed": false, 00:27:16.644 "zoned": false, 00:27:16.644 "supported_io_types": { 00:27:16.644 "read": true, 00:27:16.644 "write": true, 00:27:16.644 "unmap": true, 00:27:16.644 "write_zeroes": true, 00:27:16.644 "flush": true, 00:27:16.644 "reset": true, 00:27:16.644 "compare": false, 00:27:16.644 "compare_and_write": false, 00:27:16.644 "abort": false, 00:27:16.644 "nvme_admin": false, 00:27:16.644 "nvme_io": false 00:27:16.644 }, 00:27:16.644 "memory_domains": [ 00:27:16.644 { 00:27:16.644 "dma_device_id": "system", 00:27:16.644 "dma_device_type": 1 00:27:16.644 }, 00:27:16.644 { 00:27:16.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.644 "dma_device_type": 2 00:27:16.644 }, 00:27:16.644 { 00:27:16.644 "dma_device_id": "system", 00:27:16.644 "dma_device_type": 1 00:27:16.644 }, 00:27:16.644 { 00:27:16.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.644 "dma_device_type": 2 00:27:16.645 }, 00:27:16.645 { 00:27:16.645 "dma_device_id": "system", 00:27:16.645 "dma_device_type": 1 00:27:16.645 }, 00:27:16.645 { 00:27:16.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.645 "dma_device_type": 2 00:27:16.645 }, 00:27:16.645 { 00:27:16.645 "dma_device_id": "system", 00:27:16.645 "dma_device_type": 1 00:27:16.645 }, 00:27:16.645 { 00:27:16.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.645 "dma_device_type": 2 00:27:16.645 } 00:27:16.645 ], 00:27:16.645 "driver_specific": { 00:27:16.645 "raid": { 00:27:16.645 "uuid": "10855a95-0b44-4d3e-8cb2-c4c17330b646", 00:27:16.645 "strip_size_kb": 64, 00:27:16.645 "state": "online", 00:27:16.645 "raid_level": "concat", 00:27:16.645 "superblock": true, 00:27:16.645 "num_base_bdevs": 4, 00:27:16.645 "num_base_bdevs_discovered": 4, 00:27:16.645 "num_base_bdevs_operational": 4, 00:27:16.645 "base_bdevs_list": [ 00:27:16.645 { 00:27:16.645 "name": "NewBaseBdev", 00:27:16.645 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:16.645 "is_configured": true, 00:27:16.645 "data_offset": 2048, 00:27:16.645 "data_size": 63488 00:27:16.645 }, 00:27:16.645 { 00:27:16.645 "name": "BaseBdev2", 00:27:16.645 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:16.645 "is_configured": true, 
00:27:16.645 "data_offset": 2048, 00:27:16.645 "data_size": 63488 00:27:16.645 }, 00:27:16.645 { 00:27:16.645 "name": "BaseBdev3", 00:27:16.645 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:16.645 "is_configured": true, 00:27:16.645 "data_offset": 2048, 00:27:16.645 "data_size": 63488 00:27:16.645 }, 00:27:16.645 { 00:27:16.645 "name": "BaseBdev4", 00:27:16.645 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:16.645 "is_configured": true, 00:27:16.645 "data_offset": 2048, 00:27:16.645 "data_size": 63488 00:27:16.645 } 00:27:16.645 ] 00:27:16.645 } 00:27:16.645 } 00:27:16.645 }' 00:27:16.645 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:16.645 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:27:16.645 BaseBdev2 00:27:16.645 BaseBdev3 00:27:16.645 BaseBdev4' 00:27:16.645 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:16.645 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:27:16.645 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:16.902 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:16.902 "name": "NewBaseBdev", 00:27:16.902 "aliases": [ 00:27:16.902 "8cc1b954-f340-448a-a5f6-f5030332af01" 00:27:16.902 ], 00:27:16.902 "product_name": "Malloc disk", 00:27:16.902 "block_size": 512, 00:27:16.902 "num_blocks": 65536, 00:27:16.902 "uuid": "8cc1b954-f340-448a-a5f6-f5030332af01", 00:27:16.902 "assigned_rate_limits": { 00:27:16.902 "rw_ios_per_sec": 0, 00:27:16.902 "rw_mbytes_per_sec": 0, 00:27:16.902 "r_mbytes_per_sec": 0, 00:27:16.902 "w_mbytes_per_sec": 0 00:27:16.902 }, 00:27:16.902 "claimed": true, 00:27:16.902 "claim_type": "exclusive_write", 00:27:16.902 "zoned": false, 00:27:16.902 "supported_io_types": { 00:27:16.902 "read": true, 00:27:16.902 "write": true, 00:27:16.902 "unmap": true, 00:27:16.902 "write_zeroes": true, 00:27:16.902 "flush": true, 00:27:16.902 "reset": true, 00:27:16.902 "compare": false, 00:27:16.902 "compare_and_write": false, 00:27:16.902 "abort": true, 00:27:16.902 "nvme_admin": false, 00:27:16.902 "nvme_io": false 00:27:16.902 }, 00:27:16.902 "memory_domains": [ 00:27:16.902 { 00:27:16.902 "dma_device_id": "system", 00:27:16.902 "dma_device_type": 1 00:27:16.902 }, 00:27:16.902 { 00:27:16.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.902 "dma_device_type": 2 00:27:16.902 } 00:27:16.902 ], 00:27:16.903 "driver_specific": {} 00:27:16.903 }' 00:27:16.903 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:16.903 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:16.903 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:16.903 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:17.159 11:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:17.159 
11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:17.159 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:17.417 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:17.417 "name": "BaseBdev2", 00:27:17.417 "aliases": [ 00:27:17.417 "6cf92919-8bed-44f4-b721-7931af86d115" 00:27:17.417 ], 00:27:17.417 "product_name": "Malloc disk", 00:27:17.417 "block_size": 512, 00:27:17.417 "num_blocks": 65536, 00:27:17.417 "uuid": "6cf92919-8bed-44f4-b721-7931af86d115", 00:27:17.417 "assigned_rate_limits": { 00:27:17.417 "rw_ios_per_sec": 0, 00:27:17.417 "rw_mbytes_per_sec": 0, 00:27:17.417 "r_mbytes_per_sec": 0, 00:27:17.417 "w_mbytes_per_sec": 0 00:27:17.417 }, 00:27:17.417 "claimed": true, 00:27:17.417 "claim_type": "exclusive_write", 00:27:17.417 "zoned": false, 00:27:17.417 "supported_io_types": { 00:27:17.417 "read": true, 00:27:17.417 "write": true, 00:27:17.417 "unmap": true, 00:27:17.417 "write_zeroes": true, 00:27:17.417 "flush": true, 00:27:17.417 "reset": true, 00:27:17.417 "compare": false, 00:27:17.417 "compare_and_write": false, 00:27:17.417 "abort": true, 00:27:17.417 "nvme_admin": false, 00:27:17.417 "nvme_io": false 00:27:17.417 }, 00:27:17.417 "memory_domains": [ 00:27:17.417 { 00:27:17.417 "dma_device_id": "system", 00:27:17.417 "dma_device_type": 1 00:27:17.417 }, 00:27:17.417 { 00:27:17.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.417 "dma_device_type": 2 00:27:17.417 } 00:27:17.417 ], 00:27:17.417 "driver_specific": {} 00:27:17.417 }' 00:27:17.417 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:17.417 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:17.417 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:17.417 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:17.675 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:17.675 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:17.675 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:17.675 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:17.675 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:17.675 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.675 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.932 11:50:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:17.932 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:17.932 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:17.932 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:17.932 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:17.932 "name": "BaseBdev3", 00:27:17.932 "aliases": [ 00:27:17.932 "c668b454-0763-44f6-bc96-5a9310e9490b" 00:27:17.932 ], 00:27:17.932 "product_name": "Malloc disk", 00:27:17.932 "block_size": 512, 00:27:17.932 "num_blocks": 65536, 00:27:17.932 "uuid": "c668b454-0763-44f6-bc96-5a9310e9490b", 00:27:17.932 "assigned_rate_limits": { 00:27:17.932 "rw_ios_per_sec": 0, 00:27:17.932 "rw_mbytes_per_sec": 0, 00:27:17.932 "r_mbytes_per_sec": 0, 00:27:17.932 "w_mbytes_per_sec": 0 00:27:17.932 }, 00:27:17.932 "claimed": true, 00:27:17.932 "claim_type": "exclusive_write", 00:27:17.932 "zoned": false, 00:27:17.932 "supported_io_types": { 00:27:17.932 "read": true, 00:27:17.932 "write": true, 00:27:17.932 "unmap": true, 00:27:17.932 "write_zeroes": true, 00:27:17.932 "flush": true, 00:27:17.932 "reset": true, 00:27:17.932 "compare": false, 00:27:17.932 "compare_and_write": false, 00:27:17.932 "abort": true, 00:27:17.932 "nvme_admin": false, 00:27:17.932 "nvme_io": false 00:27:17.932 }, 00:27:17.932 "memory_domains": [ 00:27:17.932 { 00:27:17.932 "dma_device_id": "system", 00:27:17.932 "dma_device_type": 1 00:27:17.932 }, 00:27:17.932 { 00:27:17.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.932 "dma_device_type": 2 00:27:17.932 } 00:27:17.932 ], 00:27:17.932 "driver_specific": {} 00:27:17.932 }' 00:27:17.932 11:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:18.190 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.447 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.447 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:18.447 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:18.447 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:18.447 11:50:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:18.705 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:18.705 "name": "BaseBdev4", 00:27:18.705 "aliases": [ 00:27:18.705 "34f023b6-027b-4eed-9b1e-b2c4a1f38561" 00:27:18.705 ], 00:27:18.705 "product_name": "Malloc disk", 00:27:18.705 "block_size": 512, 00:27:18.705 "num_blocks": 65536, 00:27:18.705 "uuid": "34f023b6-027b-4eed-9b1e-b2c4a1f38561", 00:27:18.705 "assigned_rate_limits": { 00:27:18.705 "rw_ios_per_sec": 0, 00:27:18.705 "rw_mbytes_per_sec": 0, 00:27:18.705 "r_mbytes_per_sec": 0, 00:27:18.705 "w_mbytes_per_sec": 0 00:27:18.705 }, 00:27:18.705 "claimed": true, 00:27:18.705 "claim_type": "exclusive_write", 00:27:18.705 "zoned": false, 00:27:18.705 "supported_io_types": { 00:27:18.705 "read": true, 00:27:18.705 "write": true, 00:27:18.705 "unmap": true, 00:27:18.705 "write_zeroes": true, 00:27:18.705 "flush": true, 00:27:18.705 "reset": true, 00:27:18.705 "compare": false, 00:27:18.705 "compare_and_write": false, 00:27:18.705 "abort": true, 00:27:18.705 "nvme_admin": false, 00:27:18.705 "nvme_io": false 00:27:18.705 }, 00:27:18.705 "memory_domains": [ 00:27:18.705 { 00:27:18.705 "dma_device_id": "system", 00:27:18.705 "dma_device_type": 1 00:27:18.705 }, 00:27:18.705 { 00:27:18.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.705 "dma_device_type": 2 00:27:18.705 } 00:27:18.705 ], 00:27:18.705 "driver_specific": {} 00:27:18.705 }' 00:27:18.705 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.705 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.705 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:18.705 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.705 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.964 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:18.964 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.964 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.964 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:18.964 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.964 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.964 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:18.964 11:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:19.221 [2024-06-10 11:50:51.251266] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:19.221 [2024-06-10 11:50:51.251503] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:19.221 [2024-06-10 11:50:51.251681] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.221 [2024-06-10 11:50:51.251831] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.221 [2024-06-10 11:50:51.251912] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:27:19.221 11:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 140087 00:27:19.221 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 140087 ']' 00:27:19.222 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 140087 00:27:19.222 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:27:19.222 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:19.222 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 140087 00:27:19.480 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:19.480 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:19.480 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 140087' 00:27:19.480 killing process with pid 140087 00:27:19.480 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 140087 00:27:19.480 [2024-06-10 11:50:51.302415] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:19.480 11:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 140087 00:27:19.754 [2024-06-10 11:50:51.741675] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:21.128 11:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:27:21.128 00:27:21.128 real 0m35.702s 00:27:21.128 user 1m4.388s 00:27:21.128 sys 0m5.323s 00:27:21.128 11:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:21.128 ************************************ 00:27:21.128 END TEST raid_state_function_test_sb 00:27:21.128 11:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.128 ************************************ 00:27:21.128 11:50:53 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:27:21.128 11:50:53 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:27:21.128 11:50:53 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:21.128 11:50:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:21.128 ************************************ 00:27:21.128 START TEST raid_superblock_test 00:27:21.128 ************************************ 00:27:21.128 11:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test concat 4 00:27:21.128 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:27:21.128 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:27:21.128 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- 
# base_bdevs_pt_uuid=() 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=141202 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 141202 /var/tmp/spdk-raid.sock 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 141202 ']' 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:21.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:21.386 11:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.386 [2024-06-10 11:50:53.274895] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
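The entries above show raid_superblock_test bringing up its own RPC target: bdev_svc is launched on a dedicated socket (/var/tmp/spdk-raid.sock) with bdev_raid debug logging, and the script blocks in waitforlisten until the application answers before any further rpc.py calls are issued. A minimal sketch of that startup pattern, reusing the paths and flags visible in the log; the polling loop is a simplified stand-in for the waitforlisten helper from autotest_common.sh, not its actual implementation:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk-raid.sock

# launch the bdev_svc app on a private RPC socket with raid debug logs (flags as in the log above)
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$RPC_SOCK" -L bdev_raid &
raid_pid=$!

# simplified waitforlisten: poll until the RPC server responds on the socket
until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

# ... the test's rpc.py calls run against $RPC_SOCK from here on ...
kill "$raid_pid"
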
00:27:21.386 [2024-06-10 11:50:53.275896] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141202 ] 00:27:21.645 [2024-06-10 11:50:53.460215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.904 [2024-06-10 11:50:53.733522] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.904 [2024-06-10 11:50:53.952493] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:22.162 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:22.420 malloc1 00:27:22.420 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:22.678 [2024-06-10 11:50:54.666154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:22.678 [2024-06-10 11:50:54.666527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.678 [2024-06-10 11:50:54.666749] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:27:22.679 [2024-06-10 11:50:54.666908] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.679 [2024-06-10 11:50:54.670128] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.679 [2024-06-10 11:50:54.670331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:22.679 pt1 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:22.679 11:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:23.245 malloc2 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:23.245 [2024-06-10 11:50:55.242897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:23.245 [2024-06-10 11:50:55.243190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.245 [2024-06-10 11:50:55.243348] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:23.245 [2024-06-10 11:50:55.243443] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.245 [2024-06-10 11:50:55.246079] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.245 [2024-06-10 11:50:55.246254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:23.245 pt2 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:23.245 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:23.812 malloc3 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:23.812 [2024-06-10 11:50:55.815607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:23.812 [2024-06-10 11:50:55.815977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.812 [2024-06-10 11:50:55.816050] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:23.812 [2024-06-10 11:50:55.816172] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.812 [2024-06-10 11:50:55.818684] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.812 [2024-06-10 11:50:55.818872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:23.812 pt3 00:27:23.812 11:50:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:23.812 11:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:27:24.070 malloc4 00:27:24.328 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:24.328 [2024-06-10 11:50:56.343348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:24.328 [2024-06-10 11:50:56.343713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.328 [2024-06-10 11:50:56.343864] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:24.328 [2024-06-10 11:50:56.343991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.328 [2024-06-10 11:50:56.346959] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.328 [2024-06-10 11:50:56.347159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:24.328 pt4 00:27:24.328 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:24.328 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:24.328 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:27:24.586 [2024-06-10 11:50:56.611531] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:24.586 [2024-06-10 11:50:56.613934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:24.586 [2024-06-10 11:50:56.614166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:24.586 [2024-06-10 11:50:56.614343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:24.586 [2024-06-10 11:50:56.614650] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:27:24.586 [2024-06-10 11:50:56.614777] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:24.586 [2024-06-10 11:50:56.614999] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:24.586 [2024-06-10 11:50:56.615454] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:27:24.586 [2024-06-10 11:50:56.615566] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:27:24.586 [2024-06-10 11:50:56.615898] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.586 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:24.586 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:24.586 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:24.586 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:24.586 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:24.586 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:24.845 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:24.845 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:24.845 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:24.845 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:24.845 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.845 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.103 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:25.103 "name": "raid_bdev1", 00:27:25.103 "uuid": "bba78ee0-03e5-463e-bfbc-77807ee88a5b", 00:27:25.103 "strip_size_kb": 64, 00:27:25.103 "state": "online", 00:27:25.103 "raid_level": "concat", 00:27:25.103 "superblock": true, 00:27:25.103 "num_base_bdevs": 4, 00:27:25.103 "num_base_bdevs_discovered": 4, 00:27:25.103 "num_base_bdevs_operational": 4, 00:27:25.103 "base_bdevs_list": [ 00:27:25.103 { 00:27:25.103 "name": "pt1", 00:27:25.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:25.103 "is_configured": true, 00:27:25.103 "data_offset": 2048, 00:27:25.103 "data_size": 63488 00:27:25.103 }, 00:27:25.103 { 00:27:25.103 "name": "pt2", 00:27:25.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.103 "is_configured": true, 00:27:25.103 "data_offset": 2048, 00:27:25.103 "data_size": 63488 00:27:25.103 }, 00:27:25.103 { 00:27:25.103 "name": "pt3", 00:27:25.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:25.103 "is_configured": true, 00:27:25.103 "data_offset": 2048, 00:27:25.103 "data_size": 63488 00:27:25.103 }, 00:27:25.103 { 00:27:25.103 "name": "pt4", 00:27:25.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:25.103 "is_configured": true, 00:27:25.103 "data_offset": 2048, 00:27:25.103 "data_size": 63488 00:27:25.103 } 00:27:25.103 ] 00:27:25.103 }' 00:27:25.103 11:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:25.103 11:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
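verify_raid_bdev_state above pulls the array's entry out of bdev_raid_get_bdevs all with jq and compares state, raid_level, strip_size_kb and the base-bdev counts against what the test configured; the verify_raid_bdev_properties trace that follows repeats the same RPC-plus-jq pattern for each configured base bdev (block_size, md_size, md_interleave, dif_type). A condensed sketch of that style of check, using the socket and commands shown in the log; the expected values are the ones this particular test sets up, not general defaults:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# fetch the raid bdev's entry and assert the fields the test configured
info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state <<<"$info") == online ]]
[[ $(jq -r .raid_level <<<"$info") == concat ]]
[[ $(jq -r .strip_size_kb <<<"$info") == 64 ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 4 ]]

# per-base-bdev property check, in the spirit of verify_raid_bdev_properties
for name in pt1 pt2 pt3 pt4; do
    [[ $(rpc bdev_get_bdevs -b "$name" | jq '.[].block_size') == 512 ]]
done
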
00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:25.755 [2024-06-10 11:50:57.736349] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:25.755 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:25.755 "name": "raid_bdev1", 00:27:25.755 "aliases": [ 00:27:25.755 "bba78ee0-03e5-463e-bfbc-77807ee88a5b" 00:27:25.755 ], 00:27:25.755 "product_name": "Raid Volume", 00:27:25.755 "block_size": 512, 00:27:25.755 "num_blocks": 253952, 00:27:25.755 "uuid": "bba78ee0-03e5-463e-bfbc-77807ee88a5b", 00:27:25.755 "assigned_rate_limits": { 00:27:25.755 "rw_ios_per_sec": 0, 00:27:25.755 "rw_mbytes_per_sec": 0, 00:27:25.755 "r_mbytes_per_sec": 0, 00:27:25.755 "w_mbytes_per_sec": 0 00:27:25.755 }, 00:27:25.755 "claimed": false, 00:27:25.755 "zoned": false, 00:27:25.755 "supported_io_types": { 00:27:25.755 "read": true, 00:27:25.755 "write": true, 00:27:25.755 "unmap": true, 00:27:25.755 "write_zeroes": true, 00:27:25.755 "flush": true, 00:27:25.755 "reset": true, 00:27:25.755 "compare": false, 00:27:25.755 "compare_and_write": false, 00:27:25.755 "abort": false, 00:27:25.755 "nvme_admin": false, 00:27:25.755 "nvme_io": false 00:27:25.755 }, 00:27:25.755 "memory_domains": [ 00:27:25.755 { 00:27:25.755 "dma_device_id": "system", 00:27:25.755 "dma_device_type": 1 00:27:25.755 }, 00:27:25.755 { 00:27:25.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.756 "dma_device_type": 2 00:27:25.756 }, 00:27:25.756 { 00:27:25.756 "dma_device_id": "system", 00:27:25.756 "dma_device_type": 1 00:27:25.756 }, 00:27:25.756 { 00:27:25.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.756 "dma_device_type": 2 00:27:25.756 }, 00:27:25.756 { 00:27:25.756 "dma_device_id": "system", 00:27:25.756 "dma_device_type": 1 00:27:25.756 }, 00:27:25.756 { 00:27:25.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.756 "dma_device_type": 2 00:27:25.756 }, 00:27:25.756 { 00:27:25.756 "dma_device_id": "system", 00:27:25.756 "dma_device_type": 1 00:27:25.756 }, 00:27:25.756 { 00:27:25.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.756 "dma_device_type": 2 00:27:25.756 } 00:27:25.756 ], 00:27:25.756 "driver_specific": { 00:27:25.756 "raid": { 00:27:25.756 "uuid": "bba78ee0-03e5-463e-bfbc-77807ee88a5b", 00:27:25.756 "strip_size_kb": 64, 00:27:25.756 "state": "online", 00:27:25.756 "raid_level": "concat", 00:27:25.756 "superblock": true, 00:27:25.756 "num_base_bdevs": 4, 00:27:25.756 "num_base_bdevs_discovered": 4, 00:27:25.756 "num_base_bdevs_operational": 4, 00:27:25.756 "base_bdevs_list": [ 00:27:25.756 { 00:27:25.756 "name": "pt1", 00:27:25.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:25.756 "is_configured": true, 00:27:25.756 "data_offset": 2048, 00:27:25.756 "data_size": 63488 00:27:25.756 }, 00:27:25.756 { 00:27:25.756 "name": "pt2", 00:27:25.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.756 "is_configured": true, 00:27:25.756 "data_offset": 2048, 00:27:25.756 "data_size": 63488 00:27:25.756 }, 
00:27:25.756 { 00:27:25.756 "name": "pt3", 00:27:25.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:25.756 "is_configured": true, 00:27:25.756 "data_offset": 2048, 00:27:25.756 "data_size": 63488 00:27:25.756 }, 00:27:25.756 { 00:27:25.756 "name": "pt4", 00:27:25.756 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:25.756 "is_configured": true, 00:27:25.756 "data_offset": 2048, 00:27:25.756 "data_size": 63488 00:27:25.756 } 00:27:25.756 ] 00:27:25.756 } 00:27:25.756 } 00:27:25.756 }' 00:27:25.756 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:26.014 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:26.014 pt2 00:27:26.014 pt3 00:27:26.014 pt4' 00:27:26.014 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:26.014 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:26.014 11:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:26.272 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:26.272 "name": "pt1", 00:27:26.272 "aliases": [ 00:27:26.272 "00000000-0000-0000-0000-000000000001" 00:27:26.272 ], 00:27:26.272 "product_name": "passthru", 00:27:26.272 "block_size": 512, 00:27:26.272 "num_blocks": 65536, 00:27:26.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:26.272 "assigned_rate_limits": { 00:27:26.272 "rw_ios_per_sec": 0, 00:27:26.272 "rw_mbytes_per_sec": 0, 00:27:26.272 "r_mbytes_per_sec": 0, 00:27:26.272 "w_mbytes_per_sec": 0 00:27:26.272 }, 00:27:26.272 "claimed": true, 00:27:26.272 "claim_type": "exclusive_write", 00:27:26.272 "zoned": false, 00:27:26.272 "supported_io_types": { 00:27:26.272 "read": true, 00:27:26.272 "write": true, 00:27:26.272 "unmap": true, 00:27:26.272 "write_zeroes": true, 00:27:26.272 "flush": true, 00:27:26.272 "reset": true, 00:27:26.272 "compare": false, 00:27:26.272 "compare_and_write": false, 00:27:26.272 "abort": true, 00:27:26.272 "nvme_admin": false, 00:27:26.272 "nvme_io": false 00:27:26.272 }, 00:27:26.272 "memory_domains": [ 00:27:26.272 { 00:27:26.272 "dma_device_id": "system", 00:27:26.272 "dma_device_type": 1 00:27:26.272 }, 00:27:26.272 { 00:27:26.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.272 "dma_device_type": 2 00:27:26.272 } 00:27:26.272 ], 00:27:26.272 "driver_specific": { 00:27:26.272 "passthru": { 00:27:26.272 "name": "pt1", 00:27:26.272 "base_bdev_name": "malloc1" 00:27:26.272 } 00:27:26.272 } 00:27:26.272 }' 00:27:26.272 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:26.272 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:26.272 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:26.272 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:26.272 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:26.272 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:26.272 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:26.531 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:26.531 11:50:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:26.531 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:26.531 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:26.531 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:26.531 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:26.531 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:26.531 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:26.789 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:26.789 "name": "pt2", 00:27:26.789 "aliases": [ 00:27:26.789 "00000000-0000-0000-0000-000000000002" 00:27:26.789 ], 00:27:26.789 "product_name": "passthru", 00:27:26.789 "block_size": 512, 00:27:26.789 "num_blocks": 65536, 00:27:26.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:26.789 "assigned_rate_limits": { 00:27:26.789 "rw_ios_per_sec": 0, 00:27:26.789 "rw_mbytes_per_sec": 0, 00:27:26.789 "r_mbytes_per_sec": 0, 00:27:26.789 "w_mbytes_per_sec": 0 00:27:26.789 }, 00:27:26.789 "claimed": true, 00:27:26.789 "claim_type": "exclusive_write", 00:27:26.789 "zoned": false, 00:27:26.789 "supported_io_types": { 00:27:26.789 "read": true, 00:27:26.789 "write": true, 00:27:26.789 "unmap": true, 00:27:26.789 "write_zeroes": true, 00:27:26.789 "flush": true, 00:27:26.789 "reset": true, 00:27:26.789 "compare": false, 00:27:26.789 "compare_and_write": false, 00:27:26.789 "abort": true, 00:27:26.789 "nvme_admin": false, 00:27:26.789 "nvme_io": false 00:27:26.789 }, 00:27:26.789 "memory_domains": [ 00:27:26.789 { 00:27:26.789 "dma_device_id": "system", 00:27:26.789 "dma_device_type": 1 00:27:26.789 }, 00:27:26.789 { 00:27:26.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.789 "dma_device_type": 2 00:27:26.789 } 00:27:26.789 ], 00:27:26.789 "driver_specific": { 00:27:26.789 "passthru": { 00:27:26.789 "name": "pt2", 00:27:26.789 "base_bdev_name": "malloc2" 00:27:26.789 } 00:27:26.789 } 00:27:26.789 }' 00:27:26.790 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:26.790 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:27.048 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:27.048 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:27.048 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:27.048 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:27.048 11:50:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:27.048 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:27.048 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:27.048 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:27.305 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:27.305 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:27.305 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:27:27.305 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:27.305 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:27.563 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:27.563 "name": "pt3", 00:27:27.563 "aliases": [ 00:27:27.563 "00000000-0000-0000-0000-000000000003" 00:27:27.563 ], 00:27:27.563 "product_name": "passthru", 00:27:27.563 "block_size": 512, 00:27:27.563 "num_blocks": 65536, 00:27:27.563 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:27.563 "assigned_rate_limits": { 00:27:27.563 "rw_ios_per_sec": 0, 00:27:27.563 "rw_mbytes_per_sec": 0, 00:27:27.563 "r_mbytes_per_sec": 0, 00:27:27.563 "w_mbytes_per_sec": 0 00:27:27.563 }, 00:27:27.563 "claimed": true, 00:27:27.563 "claim_type": "exclusive_write", 00:27:27.563 "zoned": false, 00:27:27.563 "supported_io_types": { 00:27:27.563 "read": true, 00:27:27.563 "write": true, 00:27:27.563 "unmap": true, 00:27:27.563 "write_zeroes": true, 00:27:27.563 "flush": true, 00:27:27.563 "reset": true, 00:27:27.563 "compare": false, 00:27:27.563 "compare_and_write": false, 00:27:27.563 "abort": true, 00:27:27.563 "nvme_admin": false, 00:27:27.563 "nvme_io": false 00:27:27.563 }, 00:27:27.563 "memory_domains": [ 00:27:27.563 { 00:27:27.563 "dma_device_id": "system", 00:27:27.563 "dma_device_type": 1 00:27:27.563 }, 00:27:27.563 { 00:27:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.563 "dma_device_type": 2 00:27:27.563 } 00:27:27.563 ], 00:27:27.563 "driver_specific": { 00:27:27.563 "passthru": { 00:27:27.563 "name": "pt3", 00:27:27.563 "base_bdev_name": "malloc3" 00:27:27.563 } 00:27:27.563 } 00:27:27.563 }' 00:27:27.563 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:27.563 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:27.563 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:27.563 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:27.563 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:27.820 11:50:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:28.385 "name": "pt4", 00:27:28.385 "aliases": [ 
00:27:28.385 "00000000-0000-0000-0000-000000000004" 00:27:28.385 ], 00:27:28.385 "product_name": "passthru", 00:27:28.385 "block_size": 512, 00:27:28.385 "num_blocks": 65536, 00:27:28.385 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:28.385 "assigned_rate_limits": { 00:27:28.385 "rw_ios_per_sec": 0, 00:27:28.385 "rw_mbytes_per_sec": 0, 00:27:28.385 "r_mbytes_per_sec": 0, 00:27:28.385 "w_mbytes_per_sec": 0 00:27:28.385 }, 00:27:28.385 "claimed": true, 00:27:28.385 "claim_type": "exclusive_write", 00:27:28.385 "zoned": false, 00:27:28.385 "supported_io_types": { 00:27:28.385 "read": true, 00:27:28.385 "write": true, 00:27:28.385 "unmap": true, 00:27:28.385 "write_zeroes": true, 00:27:28.385 "flush": true, 00:27:28.385 "reset": true, 00:27:28.385 "compare": false, 00:27:28.385 "compare_and_write": false, 00:27:28.385 "abort": true, 00:27:28.385 "nvme_admin": false, 00:27:28.385 "nvme_io": false 00:27:28.385 }, 00:27:28.385 "memory_domains": [ 00:27:28.385 { 00:27:28.385 "dma_device_id": "system", 00:27:28.385 "dma_device_type": 1 00:27:28.385 }, 00:27:28.385 { 00:27:28.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.385 "dma_device_type": 2 00:27:28.385 } 00:27:28.385 ], 00:27:28.385 "driver_specific": { 00:27:28.385 "passthru": { 00:27:28.385 "name": "pt4", 00:27:28.385 "base_bdev_name": "malloc4" 00:27:28.385 } 00:27:28.385 } 00:27:28.385 }' 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:28.385 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:28.643 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:28.643 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:28.643 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:28.643 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:28.643 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:28.643 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:28.900 [2024-06-10 11:51:00.732974] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:28.900 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=bba78ee0-03e5-463e-bfbc-77807ee88a5b 00:27:28.900 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z bba78ee0-03e5-463e-bfbc-77807ee88a5b ']' 00:27:28.900 11:51:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:29.246 [2024-06-10 11:51:01.028834] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:29.246 
[2024-06-10 11:51:01.029060] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:29.246 [2024-06-10 11:51:01.029312] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:29.246 [2024-06-10 11:51:01.029460] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:29.246 [2024-06-10 11:51:01.029545] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:27:29.246 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.246 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:29.505 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:29.505 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:29.505 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:29.505 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:29.505 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:29.505 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:29.763 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:29.763 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:30.022 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:30.022 11:51:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:30.280 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:30.280 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- 
# type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:30.538 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:30.796 [2024-06-10 11:51:02.597060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:30.796 [2024-06-10 11:51:02.599271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:30.796 [2024-06-10 11:51:02.599494] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:30.796 [2024-06-10 11:51:02.599641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:30.796 [2024-06-10 11:51:02.599782] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:30.796 [2024-06-10 11:51:02.599965] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:30.796 [2024-06-10 11:51:02.600118] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:30.796 [2024-06-10 11:51:02.600235] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:27:30.796 [2024-06-10 11:51:02.600349] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:30.796 [2024-06-10 11:51:02.600432] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:27:30.796 request: 00:27:30.796 { 00:27:30.796 "name": "raid_bdev1", 00:27:30.796 "raid_level": "concat", 00:27:30.796 "base_bdevs": [ 00:27:30.796 "malloc1", 00:27:30.796 "malloc2", 00:27:30.796 "malloc3", 00:27:30.796 "malloc4" 00:27:30.796 ], 00:27:30.796 "strip_size_kb": 64, 00:27:30.796 "superblock": false, 00:27:30.797 "method": "bdev_raid_create", 00:27:30.797 "req_id": 1 00:27:30.797 } 00:27:30.797 Got JSON-RPC error response 00:27:30.797 response: 00:27:30.797 { 00:27:30.797 "code": -17, 00:27:30.797 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:30.797 } 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:30.797 11:51:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:31.055 [2024-06-10 11:51:03.029197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:31.055 [2024-06-10 11:51:03.029405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.055 [2024-06-10 11:51:03.029508] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:31.055 [2024-06-10 11:51:03.029614] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.055 [2024-06-10 11:51:03.032090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.055 [2024-06-10 11:51:03.032263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:31.055 [2024-06-10 11:51:03.032561] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:31.055 [2024-06-10 11:51:03.032716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:31.055 pt1 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.055 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.313 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.313 "name": "raid_bdev1", 00:27:31.313 "uuid": "bba78ee0-03e5-463e-bfbc-77807ee88a5b", 00:27:31.313 "strip_size_kb": 64, 00:27:31.313 "state": "configuring", 00:27:31.313 "raid_level": "concat", 00:27:31.313 "superblock": true, 00:27:31.313 "num_base_bdevs": 4, 00:27:31.313 "num_base_bdevs_discovered": 1, 00:27:31.313 "num_base_bdevs_operational": 4, 00:27:31.313 "base_bdevs_list": [ 00:27:31.313 { 00:27:31.313 "name": "pt1", 00:27:31.313 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:27:31.313 "is_configured": true, 00:27:31.313 "data_offset": 2048, 00:27:31.313 "data_size": 63488 00:27:31.313 }, 00:27:31.313 { 00:27:31.313 "name": null, 00:27:31.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:31.313 "is_configured": false, 00:27:31.313 "data_offset": 2048, 00:27:31.313 "data_size": 63488 00:27:31.313 }, 00:27:31.313 { 00:27:31.313 "name": null, 00:27:31.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:31.313 "is_configured": false, 00:27:31.313 "data_offset": 2048, 00:27:31.313 "data_size": 63488 00:27:31.313 }, 00:27:31.313 { 00:27:31.313 "name": null, 00:27:31.313 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:31.313 "is_configured": false, 00:27:31.313 "data_offset": 2048, 00:27:31.313 "data_size": 63488 00:27:31.313 } 00:27:31.313 ] 00:27:31.313 }' 00:27:31.313 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.313 11:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.879 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:27:31.879 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:32.138 [2024-06-10 11:51:03.957384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:32.138 [2024-06-10 11:51:03.957647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.138 [2024-06-10 11:51:03.957719] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:32.138 [2024-06-10 11:51:03.957827] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.138 [2024-06-10 11:51:03.958349] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.138 [2024-06-10 11:51:03.958486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:32.138 [2024-06-10 11:51:03.958700] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:32.138 [2024-06-10 11:51:03.958810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:32.138 pt2 00:27:32.138 11:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:32.397 [2024-06-10 11:51:04.217479] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:32.397 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:32.397 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:32.397 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:32.397 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.398 "name": "raid_bdev1", 00:27:32.398 "uuid": "bba78ee0-03e5-463e-bfbc-77807ee88a5b", 00:27:32.398 "strip_size_kb": 64, 00:27:32.398 "state": "configuring", 00:27:32.398 "raid_level": "concat", 00:27:32.398 "superblock": true, 00:27:32.398 "num_base_bdevs": 4, 00:27:32.398 "num_base_bdevs_discovered": 1, 00:27:32.398 "num_base_bdevs_operational": 4, 00:27:32.398 "base_bdevs_list": [ 00:27:32.398 { 00:27:32.398 "name": "pt1", 00:27:32.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:32.398 "is_configured": true, 00:27:32.398 "data_offset": 2048, 00:27:32.398 "data_size": 63488 00:27:32.398 }, 00:27:32.398 { 00:27:32.398 "name": null, 00:27:32.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:32.398 "is_configured": false, 00:27:32.398 "data_offset": 2048, 00:27:32.398 "data_size": 63488 00:27:32.398 }, 00:27:32.398 { 00:27:32.398 "name": null, 00:27:32.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:32.398 "is_configured": false, 00:27:32.398 "data_offset": 2048, 00:27:32.398 "data_size": 63488 00:27:32.398 }, 00:27:32.398 { 00:27:32.398 "name": null, 00:27:32.398 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:32.398 "is_configured": false, 00:27:32.398 "data_offset": 2048, 00:27:32.398 "data_size": 63488 00:27:32.398 } 00:27:32.398 ] 00:27:32.398 }' 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.398 11:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.965 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:32.965 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:33.223 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:33.483 [2024-06-10 11:51:05.281708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:33.483 [2024-06-10 11:51:05.282022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.483 [2024-06-10 11:51:05.282095] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:33.483 [2024-06-10 11:51:05.282204] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.483 [2024-06-10 11:51:05.282770] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.483 [2024-06-10 11:51:05.282913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:33.483 [2024-06-10 11:51:05.283102] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:33.483 [2024-06-10 11:51:05.283204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:33.483 pt2 
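With pt2 re-registered, the loop traced at bdev_raid.sh@477-478 repeats the same passthru creation for pt3 and pt4, each carrying the UUID recorded for that slot in the on-disk superblock so the examine path can claim it for raid_bdev1. The per-slot RPC call, reduced to a small loop (names, UUIDs and socket path copied from the trace; the loop itself is a sketch of what the script does, not its literal code):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 2 3 4; do
        # Each passthru wraps malloc$i and reuses the UUID stored in the raid
        # superblock, so examine() matches pt$i back to slot $i of raid_bdev1.
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"
    done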
00:27:33.483 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:33.483 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:33.483 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:33.483 [2024-06-10 11:51:05.485753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:33.483 [2024-06-10 11:51:05.486022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.483 [2024-06-10 11:51:05.486083] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:33.483 [2024-06-10 11:51:05.486205] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.483 [2024-06-10 11:51:05.486703] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.483 [2024-06-10 11:51:05.486847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:33.483 [2024-06-10 11:51:05.487049] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:33.483 [2024-06-10 11:51:05.487150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:33.483 pt3 00:27:33.483 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:33.483 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:33.483 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:33.742 [2024-06-10 11:51:05.765779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:33.742 [2024-06-10 11:51:05.766061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.742 [2024-06-10 11:51:05.766126] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:33.742 [2024-06-10 11:51:05.766273] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.742 [2024-06-10 11:51:05.766768] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.742 [2024-06-10 11:51:05.766922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:33.742 [2024-06-10 11:51:05.767094] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:33.742 [2024-06-10 11:51:05.767153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:33.742 [2024-06-10 11:51:05.767342] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:27:33.743 [2024-06-10 11:51:05.767520] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:33.743 [2024-06-10 11:51:05.767647] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:33.743 [2024-06-10 11:51:05.768057] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:27:33.743 [2024-06-10 11:51:05.768166] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:27:33.743 [2024-06-10 11:51:05.768398] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.743 pt4 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.743 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.077 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.077 "name": "raid_bdev1", 00:27:34.077 "uuid": "bba78ee0-03e5-463e-bfbc-77807ee88a5b", 00:27:34.077 "strip_size_kb": 64, 00:27:34.077 "state": "online", 00:27:34.077 "raid_level": "concat", 00:27:34.077 "superblock": true, 00:27:34.077 "num_base_bdevs": 4, 00:27:34.077 "num_base_bdevs_discovered": 4, 00:27:34.077 "num_base_bdevs_operational": 4, 00:27:34.077 "base_bdevs_list": [ 00:27:34.077 { 00:27:34.077 "name": "pt1", 00:27:34.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:34.077 "is_configured": true, 00:27:34.077 "data_offset": 2048, 00:27:34.077 "data_size": 63488 00:27:34.077 }, 00:27:34.077 { 00:27:34.077 "name": "pt2", 00:27:34.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:34.077 "is_configured": true, 00:27:34.077 "data_offset": 2048, 00:27:34.077 "data_size": 63488 00:27:34.077 }, 00:27:34.077 { 00:27:34.077 "name": "pt3", 00:27:34.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:34.077 "is_configured": true, 00:27:34.077 "data_offset": 2048, 00:27:34.077 "data_size": 63488 00:27:34.077 }, 00:27:34.077 { 00:27:34.077 "name": "pt4", 00:27:34.077 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:34.077 "is_configured": true, 00:27:34.077 "data_offset": 2048, 00:27:34.077 "data_size": 63488 00:27:34.077 } 00:27:34.077 ] 00:27:34.077 }' 00:27:34.077 11:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.077 11:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:34.646 11:51:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:34.646 [2024-06-10 11:51:06.670216] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:34.646 "name": "raid_bdev1", 00:27:34.646 "aliases": [ 00:27:34.646 "bba78ee0-03e5-463e-bfbc-77807ee88a5b" 00:27:34.646 ], 00:27:34.646 "product_name": "Raid Volume", 00:27:34.646 "block_size": 512, 00:27:34.646 "num_blocks": 253952, 00:27:34.646 "uuid": "bba78ee0-03e5-463e-bfbc-77807ee88a5b", 00:27:34.646 "assigned_rate_limits": { 00:27:34.646 "rw_ios_per_sec": 0, 00:27:34.646 "rw_mbytes_per_sec": 0, 00:27:34.646 "r_mbytes_per_sec": 0, 00:27:34.646 "w_mbytes_per_sec": 0 00:27:34.646 }, 00:27:34.646 "claimed": false, 00:27:34.646 "zoned": false, 00:27:34.646 "supported_io_types": { 00:27:34.646 "read": true, 00:27:34.646 "write": true, 00:27:34.646 "unmap": true, 00:27:34.646 "write_zeroes": true, 00:27:34.646 "flush": true, 00:27:34.646 "reset": true, 00:27:34.646 "compare": false, 00:27:34.646 "compare_and_write": false, 00:27:34.646 "abort": false, 00:27:34.646 "nvme_admin": false, 00:27:34.646 "nvme_io": false 00:27:34.646 }, 00:27:34.646 "memory_domains": [ 00:27:34.646 { 00:27:34.646 "dma_device_id": "system", 00:27:34.646 "dma_device_type": 1 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.646 "dma_device_type": 2 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "dma_device_id": "system", 00:27:34.646 "dma_device_type": 1 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.646 "dma_device_type": 2 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "dma_device_id": "system", 00:27:34.646 "dma_device_type": 1 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.646 "dma_device_type": 2 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "dma_device_id": "system", 00:27:34.646 "dma_device_type": 1 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.646 "dma_device_type": 2 00:27:34.646 } 00:27:34.646 ], 00:27:34.646 "driver_specific": { 00:27:34.646 "raid": { 00:27:34.646 "uuid": "bba78ee0-03e5-463e-bfbc-77807ee88a5b", 00:27:34.646 "strip_size_kb": 64, 00:27:34.646 "state": "online", 00:27:34.646 "raid_level": "concat", 00:27:34.646 "superblock": true, 00:27:34.646 "num_base_bdevs": 4, 00:27:34.646 "num_base_bdevs_discovered": 4, 00:27:34.646 "num_base_bdevs_operational": 4, 00:27:34.646 "base_bdevs_list": [ 00:27:34.646 { 00:27:34.646 "name": "pt1", 00:27:34.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:34.646 "is_configured": true, 00:27:34.646 "data_offset": 2048, 00:27:34.646 "data_size": 63488 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "name": "pt2", 00:27:34.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:34.646 "is_configured": true, 
00:27:34.646 "data_offset": 2048, 00:27:34.646 "data_size": 63488 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "name": "pt3", 00:27:34.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:34.646 "is_configured": true, 00:27:34.646 "data_offset": 2048, 00:27:34.646 "data_size": 63488 00:27:34.646 }, 00:27:34.646 { 00:27:34.646 "name": "pt4", 00:27:34.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:34.646 "is_configured": true, 00:27:34.646 "data_offset": 2048, 00:27:34.646 "data_size": 63488 00:27:34.646 } 00:27:34.646 ] 00:27:34.646 } 00:27:34.646 } 00:27:34.646 }' 00:27:34.646 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:34.905 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:34.905 pt2 00:27:34.905 pt3 00:27:34.905 pt4' 00:27:34.905 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:34.905 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:34.905 11:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:35.164 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:35.164 "name": "pt1", 00:27:35.164 "aliases": [ 00:27:35.164 "00000000-0000-0000-0000-000000000001" 00:27:35.164 ], 00:27:35.164 "product_name": "passthru", 00:27:35.164 "block_size": 512, 00:27:35.164 "num_blocks": 65536, 00:27:35.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:35.164 "assigned_rate_limits": { 00:27:35.164 "rw_ios_per_sec": 0, 00:27:35.164 "rw_mbytes_per_sec": 0, 00:27:35.164 "r_mbytes_per_sec": 0, 00:27:35.164 "w_mbytes_per_sec": 0 00:27:35.164 }, 00:27:35.164 "claimed": true, 00:27:35.165 "claim_type": "exclusive_write", 00:27:35.165 "zoned": false, 00:27:35.165 "supported_io_types": { 00:27:35.165 "read": true, 00:27:35.165 "write": true, 00:27:35.165 "unmap": true, 00:27:35.165 "write_zeroes": true, 00:27:35.165 "flush": true, 00:27:35.165 "reset": true, 00:27:35.165 "compare": false, 00:27:35.165 "compare_and_write": false, 00:27:35.165 "abort": true, 00:27:35.165 "nvme_admin": false, 00:27:35.165 "nvme_io": false 00:27:35.165 }, 00:27:35.165 "memory_domains": [ 00:27:35.165 { 00:27:35.165 "dma_device_id": "system", 00:27:35.165 "dma_device_type": 1 00:27:35.165 }, 00:27:35.165 { 00:27:35.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:35.165 "dma_device_type": 2 00:27:35.165 } 00:27:35.165 ], 00:27:35.165 "driver_specific": { 00:27:35.165 "passthru": { 00:27:35.165 "name": "pt1", 00:27:35.165 "base_bdev_name": "malloc1" 00:27:35.165 } 00:27:35.165 } 00:27:35.165 }' 00:27:35.165 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:35.165 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:35.165 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:35.165 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:35.165 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:35.165 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:35.165 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:35.423 11:51:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:35.423 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:35.423 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:35.423 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:35.423 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:35.423 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:35.423 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:35.423 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:35.682 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:35.682 "name": "pt2", 00:27:35.682 "aliases": [ 00:27:35.682 "00000000-0000-0000-0000-000000000002" 00:27:35.682 ], 00:27:35.682 "product_name": "passthru", 00:27:35.682 "block_size": 512, 00:27:35.682 "num_blocks": 65536, 00:27:35.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:35.682 "assigned_rate_limits": { 00:27:35.682 "rw_ios_per_sec": 0, 00:27:35.682 "rw_mbytes_per_sec": 0, 00:27:35.682 "r_mbytes_per_sec": 0, 00:27:35.682 "w_mbytes_per_sec": 0 00:27:35.682 }, 00:27:35.682 "claimed": true, 00:27:35.682 "claim_type": "exclusive_write", 00:27:35.682 "zoned": false, 00:27:35.682 "supported_io_types": { 00:27:35.682 "read": true, 00:27:35.682 "write": true, 00:27:35.682 "unmap": true, 00:27:35.682 "write_zeroes": true, 00:27:35.682 "flush": true, 00:27:35.682 "reset": true, 00:27:35.682 "compare": false, 00:27:35.682 "compare_and_write": false, 00:27:35.682 "abort": true, 00:27:35.682 "nvme_admin": false, 00:27:35.682 "nvme_io": false 00:27:35.682 }, 00:27:35.682 "memory_domains": [ 00:27:35.682 { 00:27:35.682 "dma_device_id": "system", 00:27:35.682 "dma_device_type": 1 00:27:35.682 }, 00:27:35.682 { 00:27:35.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:35.682 "dma_device_type": 2 00:27:35.682 } 00:27:35.682 ], 00:27:35.682 "driver_specific": { 00:27:35.682 "passthru": { 00:27:35.682 "name": "pt2", 00:27:35.682 "base_bdev_name": "malloc2" 00:27:35.682 } 00:27:35.682 } 00:27:35.682 }' 00:27:35.683 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:35.683 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:35.683 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:35.683 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:35.683 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:35.941 11:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:36.201 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:36.201 "name": "pt3", 00:27:36.201 "aliases": [ 00:27:36.201 "00000000-0000-0000-0000-000000000003" 00:27:36.201 ], 00:27:36.201 "product_name": "passthru", 00:27:36.201 "block_size": 512, 00:27:36.201 "num_blocks": 65536, 00:27:36.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:36.201 "assigned_rate_limits": { 00:27:36.201 "rw_ios_per_sec": 0, 00:27:36.201 "rw_mbytes_per_sec": 0, 00:27:36.201 "r_mbytes_per_sec": 0, 00:27:36.201 "w_mbytes_per_sec": 0 00:27:36.201 }, 00:27:36.201 "claimed": true, 00:27:36.201 "claim_type": "exclusive_write", 00:27:36.201 "zoned": false, 00:27:36.201 "supported_io_types": { 00:27:36.201 "read": true, 00:27:36.201 "write": true, 00:27:36.201 "unmap": true, 00:27:36.201 "write_zeroes": true, 00:27:36.201 "flush": true, 00:27:36.201 "reset": true, 00:27:36.201 "compare": false, 00:27:36.201 "compare_and_write": false, 00:27:36.201 "abort": true, 00:27:36.201 "nvme_admin": false, 00:27:36.201 "nvme_io": false 00:27:36.201 }, 00:27:36.201 "memory_domains": [ 00:27:36.201 { 00:27:36.201 "dma_device_id": "system", 00:27:36.201 "dma_device_type": 1 00:27:36.201 }, 00:27:36.201 { 00:27:36.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.201 "dma_device_type": 2 00:27:36.201 } 00:27:36.201 ], 00:27:36.201 "driver_specific": { 00:27:36.201 "passthru": { 00:27:36.201 "name": "pt3", 00:27:36.201 "base_bdev_name": "malloc3" 00:27:36.201 } 00:27:36.201 } 00:27:36.201 }' 00:27:36.201 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:36.201 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:36.201 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:36.201 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:36.460 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:36.779 11:51:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:36.779 "name": "pt4", 00:27:36.779 "aliases": [ 00:27:36.779 "00000000-0000-0000-0000-000000000004" 00:27:36.779 ], 00:27:36.779 "product_name": "passthru", 00:27:36.779 "block_size": 512, 00:27:36.779 "num_blocks": 65536, 00:27:36.779 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:36.779 "assigned_rate_limits": { 00:27:36.779 "rw_ios_per_sec": 0, 00:27:36.779 "rw_mbytes_per_sec": 0, 00:27:36.779 "r_mbytes_per_sec": 0, 00:27:36.779 "w_mbytes_per_sec": 0 00:27:36.779 }, 00:27:36.779 "claimed": true, 00:27:36.779 "claim_type": "exclusive_write", 00:27:36.779 "zoned": false, 00:27:36.779 "supported_io_types": { 00:27:36.779 "read": true, 00:27:36.779 "write": true, 00:27:36.779 "unmap": true, 00:27:36.779 "write_zeroes": true, 00:27:36.779 "flush": true, 00:27:36.779 "reset": true, 00:27:36.779 "compare": false, 00:27:36.779 "compare_and_write": false, 00:27:36.779 "abort": true, 00:27:36.779 "nvme_admin": false, 00:27:36.779 "nvme_io": false 00:27:36.779 }, 00:27:36.779 "memory_domains": [ 00:27:36.779 { 00:27:36.779 "dma_device_id": "system", 00:27:36.779 "dma_device_type": 1 00:27:36.779 }, 00:27:36.779 { 00:27:36.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.779 "dma_device_type": 2 00:27:36.779 } 00:27:36.779 ], 00:27:36.779 "driver_specific": { 00:27:36.779 "passthru": { 00:27:36.779 "name": "pt4", 00:27:36.779 "base_bdev_name": "malloc4" 00:27:36.779 } 00:27:36.779 } 00:27:36.779 }' 00:27:36.779 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:36.779 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:36.779 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:36.779 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:36.779 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:36.779 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:36.779 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:37.038 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:37.038 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:37.038 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:37.038 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:37.038 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:37.038 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:37.038 11:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:37.297 [2024-06-10 11:51:09.211709] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' bba78ee0-03e5-463e-bfbc-77807ee88a5b '!=' bba78ee0-03e5-463e-bfbc-77807ee88a5b ']' 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:37.297 11:51:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 141202 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 141202 ']' 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 141202 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 141202 00:27:37.297 killing process with pid 141202 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 141202' 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 141202 00:27:37.297 11:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 141202 00:27:37.297 [2024-06-10 11:51:09.258486] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:37.297 [2024-06-10 11:51:09.258579] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:37.297 [2024-06-10 11:51:09.258663] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:37.297 [2024-06-10 11:51:09.258697] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:27:37.876 [2024-06-10 11:51:09.670468] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:39.304 11:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:27:39.304 00:27:39.304 real 0m17.841s 00:27:39.304 user 0m31.232s 00:27:39.304 sys 0m2.539s 00:27:39.304 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:39.304 ************************************ 00:27:39.304 END TEST raid_superblock_test 00:27:39.304 ************************************ 00:27:39.304 11:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.304 11:51:11 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:27:39.304 11:51:11 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:27:39.304 11:51:11 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:39.304 11:51:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:39.304 ************************************ 00:27:39.304 START TEST raid_read_error_test 00:27:39.304 ************************************ 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 4 read 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.FlrHwG1hSh 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=141756 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 141756 /var/tmp/spdk-raid.sock 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 141756 ']' 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
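The read-error test that starts here launches bdevperf as the RPC target and then builds each base bdev as a malloc -> error -> passthru stack, as the following trace shows. One iteration of that stacking, condensed from the RPC calls in the trace (only BaseBdev1 is shown; the script derives the other three names from its loop over $base_bdevs):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # 32 MiB malloc bdev with 512-byte blocks, as created by the test for BaseBdev1.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc

    # The error bdev wraps the malloc and exposes it as EE_BaseBdev1_malloc,
    # which is where failures are injected later via bdev_error_inject_error.
    $RPC bdev_error_create BaseBdev1_malloc

    # A passthru on top gives the raid module a claimable bdev named BaseBdev1.
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1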
00:27:39.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:39.304 11:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.304 [2024-06-10 11:51:11.177916] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:27:39.304 [2024-06-10 11:51:11.178300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141756 ] 00:27:39.599 [2024-06-10 11:51:11.347046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.599 [2024-06-10 11:51:11.639473] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.863 [2024-06-10 11:51:11.857882] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:40.192 11:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:40.192 11:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:27:40.192 11:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:40.192 11:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:40.463 BaseBdev1_malloc 00:27:40.463 11:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:40.721 true 00:27:40.721 11:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:40.980 [2024-06-10 11:51:12.968966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:40.980 [2024-06-10 11:51:12.969284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:40.980 [2024-06-10 11:51:12.969401] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:40.980 [2024-06-10 11:51:12.969513] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:40.980 [2024-06-10 11:51:12.972072] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:40.980 [2024-06-10 11:51:12.972257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:40.980 BaseBdev1 00:27:40.980 11:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:40.980 11:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:41.547 BaseBdev2_malloc 00:27:41.547 11:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:41.807 true 00:27:41.807 11:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:41.807 
[2024-06-10 11:51:13.798553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:41.807 [2024-06-10 11:51:13.798885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:41.807 [2024-06-10 11:51:13.798975] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:41.807 [2024-06-10 11:51:13.799085] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:41.807 [2024-06-10 11:51:13.801520] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:41.807 [2024-06-10 11:51:13.801691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:41.807 BaseBdev2 00:27:41.807 11:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:41.807 11:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:42.098 BaseBdev3_malloc 00:27:42.098 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:42.356 true 00:27:42.356 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:42.614 [2024-06-10 11:51:14.596610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:42.614 [2024-06-10 11:51:14.596926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:42.614 [2024-06-10 11:51:14.597002] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:42.614 [2024-06-10 11:51:14.597264] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:42.614 [2024-06-10 11:51:14.599908] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:42.614 [2024-06-10 11:51:14.600110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:42.614 BaseBdev3 00:27:42.614 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:42.614 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:42.873 BaseBdev4_malloc 00:27:43.131 11:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:43.389 true 00:27:43.389 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:43.389 [2024-06-10 11:51:15.412984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:43.390 [2024-06-10 11:51:15.413330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.390 [2024-06-10 11:51:15.413512] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:43.390 [2024-06-10 11:51:15.413630] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.390 
[2024-06-10 11:51:15.416247] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.390 [2024-06-10 11:51:15.416485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:43.390 BaseBdev4 00:27:43.390 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:43.648 [2024-06-10 11:51:15.641044] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:43.648 [2024-06-10 11:51:15.643507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:43.648 [2024-06-10 11:51:15.643736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:43.648 [2024-06-10 11:51:15.643903] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:43.648 [2024-06-10 11:51:15.644260] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:27:43.648 [2024-06-10 11:51:15.644396] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:43.648 [2024-06-10 11:51:15.644567] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:43.648 [2024-06-10 11:51:15.645032] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:27:43.648 [2024-06-10 11:51:15.645141] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:27:43.648 [2024-06-10 11:51:15.645434] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.648 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.906 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.906 "name": "raid_bdev1", 00:27:43.906 "uuid": "59955803-7bfa-4a9c-87b0-e59c671cf2ca", 00:27:43.906 "strip_size_kb": 64, 00:27:43.906 "state": "online", 00:27:43.906 "raid_level": "concat", 00:27:43.906 "superblock": true, 00:27:43.906 
"num_base_bdevs": 4, 00:27:43.906 "num_base_bdevs_discovered": 4, 00:27:43.906 "num_base_bdevs_operational": 4, 00:27:43.906 "base_bdevs_list": [ 00:27:43.906 { 00:27:43.906 "name": "BaseBdev1", 00:27:43.906 "uuid": "b7d1d0b6-d6e7-53fc-add8-cc829c9bf81c", 00:27:43.906 "is_configured": true, 00:27:43.906 "data_offset": 2048, 00:27:43.906 "data_size": 63488 00:27:43.906 }, 00:27:43.906 { 00:27:43.906 "name": "BaseBdev2", 00:27:43.906 "uuid": "d38ab8ff-6bb8-5017-af99-5402842ccc40", 00:27:43.906 "is_configured": true, 00:27:43.906 "data_offset": 2048, 00:27:43.906 "data_size": 63488 00:27:43.906 }, 00:27:43.907 { 00:27:43.907 "name": "BaseBdev3", 00:27:43.907 "uuid": "d1b44ca5-7609-5262-ac03-bca39c10da2c", 00:27:43.907 "is_configured": true, 00:27:43.907 "data_offset": 2048, 00:27:43.907 "data_size": 63488 00:27:43.907 }, 00:27:43.907 { 00:27:43.907 "name": "BaseBdev4", 00:27:43.907 "uuid": "6553a589-b7c6-5bb7-85a2-396f3d6cc3cb", 00:27:43.907 "is_configured": true, 00:27:43.907 "data_offset": 2048, 00:27:43.907 "data_size": 63488 00:27:43.907 } 00:27:43.907 ] 00:27:43.907 }' 00:27:43.907 11:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.907 11:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.472 11:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:44.472 11:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:44.472 [2024-06-10 11:51:16.526876] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:45.405 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.971 11:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.971 11:51:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.229 11:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:46.229 "name": "raid_bdev1", 00:27:46.229 "uuid": "59955803-7bfa-4a9c-87b0-e59c671cf2ca", 00:27:46.229 "strip_size_kb": 64, 00:27:46.229 "state": "online", 00:27:46.229 "raid_level": "concat", 00:27:46.229 "superblock": true, 00:27:46.229 "num_base_bdevs": 4, 00:27:46.229 "num_base_bdevs_discovered": 4, 00:27:46.229 "num_base_bdevs_operational": 4, 00:27:46.229 "base_bdevs_list": [ 00:27:46.229 { 00:27:46.229 "name": "BaseBdev1", 00:27:46.229 "uuid": "b7d1d0b6-d6e7-53fc-add8-cc829c9bf81c", 00:27:46.229 "is_configured": true, 00:27:46.229 "data_offset": 2048, 00:27:46.229 "data_size": 63488 00:27:46.229 }, 00:27:46.229 { 00:27:46.229 "name": "BaseBdev2", 00:27:46.229 "uuid": "d38ab8ff-6bb8-5017-af99-5402842ccc40", 00:27:46.229 "is_configured": true, 00:27:46.229 "data_offset": 2048, 00:27:46.229 "data_size": 63488 00:27:46.229 }, 00:27:46.229 { 00:27:46.229 "name": "BaseBdev3", 00:27:46.229 "uuid": "d1b44ca5-7609-5262-ac03-bca39c10da2c", 00:27:46.229 "is_configured": true, 00:27:46.229 "data_offset": 2048, 00:27:46.229 "data_size": 63488 00:27:46.229 }, 00:27:46.229 { 00:27:46.229 "name": "BaseBdev4", 00:27:46.229 "uuid": "6553a589-b7c6-5bb7-85a2-396f3d6cc3cb", 00:27:46.229 "is_configured": true, 00:27:46.229 "data_offset": 2048, 00:27:46.229 "data_size": 63488 00:27:46.229 } 00:27:46.229 ] 00:27:46.229 }' 00:27:46.229 11:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:46.229 11:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.795 11:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:47.052 [2024-06-10 11:51:18.972586] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:47.052 [2024-06-10 11:51:18.972831] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:47.052 [2024-06-10 11:51:18.975518] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:47.052 [2024-06-10 11:51:18.975669] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:47.052 [2024-06-10 11:51:18.975743] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:47.052 [2024-06-10 11:51:18.975825] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:27:47.052 0 00:27:47.052 11:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 141756 00:27:47.052 11:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 141756 ']' 00:27:47.052 11:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 141756 00:27:47.052 11:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:27:47.052 11:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:47.052 11:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 141756 00:27:47.052 11:51:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:47.052 11:51:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:47.052 11:51:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 141756' 00:27:47.052 killing process with pid 141756 00:27:47.052 11:51:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 141756 00:27:47.052 [2024-06-10 11:51:19.025268] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:47.052 11:51:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 141756 00:27:47.618 [2024-06-10 11:51:19.401216] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.FlrHwG1hSh 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:27:48.992 00:27:48.992 real 0m9.912s 00:27:48.992 user 0m14.992s 00:27:48.992 sys 0m1.250s 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:48.992 ************************************ 00:27:48.992 END TEST raid_read_error_test 00:27:48.992 ************************************ 00:27:48.992 11:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.250 11:51:21 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:27:49.250 11:51:21 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:27:49.250 11:51:21 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:49.250 11:51:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:49.250 ************************************ 00:27:49.250 START TEST raid_write_error_test 00:27:49.250 ************************************ 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 4 write 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.9ATzAxpWnw 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=141979 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 141979 /var/tmp/spdk-raid.sock 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 141979 ']' 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:49.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:49.250 11:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.250 [2024-06-10 11:51:21.155963] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
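For manual reproduction, the RPC sequence below approximates what the script drives over the next entries: each base bdev is a malloc bdev wrapped by an error-injection bdev and exposed through a passthru bdev, and the four passthru bdevs are then assembled into the concat raid. This is a condensed sketch, not the test script itself; it assumes the bdevperf instance launched above is already listening on /var/tmp/spdk-raid.sock, and the sizes and names simply mirror the log.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc          # 32 MiB malloc bdev, 512-byte blocks
  $RPC bdev_error_create BaseBdev${i}_malloc                     # error-injection wrapper, surfaced as EE_BaseBdev${i}_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
# later in the log: make BaseBdev1 fail writes, then drive I/O through bdevperf
$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests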
00:27:49.250 [2024-06-10 11:51:21.156321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141979 ] 00:27:49.507 [2024-06-10 11:51:21.314861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.507 [2024-06-10 11:51:21.530117] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.764 [2024-06-10 11:51:21.747736] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:50.329 11:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:50.329 11:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:27:50.329 11:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:50.329 11:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:50.587 BaseBdev1_malloc 00:27:50.587 11:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:50.587 true 00:27:50.587 11:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:50.845 [2024-06-10 11:51:22.805558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:50.845 [2024-06-10 11:51:22.805841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:50.845 [2024-06-10 11:51:22.805917] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:50.845 [2024-06-10 11:51:22.806008] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:50.845 [2024-06-10 11:51:22.808320] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:50.845 [2024-06-10 11:51:22.808503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:50.845 BaseBdev1 00:27:50.845 11:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:50.845 11:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:51.109 BaseBdev2_malloc 00:27:51.109 11:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:51.367 true 00:27:51.367 11:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:51.625 [2024-06-10 11:51:23.527200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:51.625 [2024-06-10 11:51:23.527466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.625 [2024-06-10 11:51:23.527618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:51.625 [2024-06-10 11:51:23.527729] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.625 [2024-06-10 11:51:23.530033] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.625 [2024-06-10 11:51:23.530192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:51.625 BaseBdev2 00:27:51.625 11:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:51.625 11:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:51.882 BaseBdev3_malloc 00:27:51.882 11:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:52.141 true 00:27:52.141 11:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:52.399 [2024-06-10 11:51:24.248078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:52.399 [2024-06-10 11:51:24.248403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.399 [2024-06-10 11:51:24.248480] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:52.400 [2024-06-10 11:51:24.248609] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.400 [2024-06-10 11:51:24.251231] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.400 [2024-06-10 11:51:24.251414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:52.400 BaseBdev3 00:27:52.400 11:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:52.400 11:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:52.657 BaseBdev4_malloc 00:27:52.657 11:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:52.916 true 00:27:52.916 11:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:53.174 [2024-06-10 11:51:25.118477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:53.174 [2024-06-10 11:51:25.118792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.174 [2024-06-10 11:51:25.118952] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:53.174 [2024-06-10 11:51:25.119070] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.174 [2024-06-10 11:51:25.121628] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.174 [2024-06-10 11:51:25.121793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:53.174 BaseBdev4 00:27:53.174 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:53.432 [2024-06-10 11:51:25.366693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:53.432 [2024-06-10 11:51:25.368971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:53.432 [2024-06-10 11:51:25.369194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:53.432 [2024-06-10 11:51:25.369364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:53.432 [2024-06-10 11:51:25.369650] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:27:53.432 [2024-06-10 11:51:25.369693] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:53.432 [2024-06-10 11:51:25.369915] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:53.432 [2024-06-10 11:51:25.370413] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:27:53.432 [2024-06-10 11:51:25.370541] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:27:53.432 [2024-06-10 11:51:25.370858] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.432 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.690 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.690 "name": "raid_bdev1", 00:27:53.690 "uuid": "f3952563-f6be-47e3-a966-87c2f6a874e7", 00:27:53.690 "strip_size_kb": 64, 00:27:53.690 "state": "online", 00:27:53.690 "raid_level": "concat", 00:27:53.690 "superblock": true, 00:27:53.690 "num_base_bdevs": 4, 00:27:53.690 "num_base_bdevs_discovered": 4, 00:27:53.690 "num_base_bdevs_operational": 4, 00:27:53.690 "base_bdevs_list": [ 00:27:53.690 { 00:27:53.690 "name": "BaseBdev1", 00:27:53.690 "uuid": "1cba3b8d-a7f2-5341-9a30-ed277b96c77f", 00:27:53.690 "is_configured": true, 00:27:53.690 "data_offset": 2048, 00:27:53.690 "data_size": 63488 00:27:53.690 }, 00:27:53.690 { 
00:27:53.690 "name": "BaseBdev2", 00:27:53.690 "uuid": "a7432dc1-269f-58b1-b8bc-8cf810deb2fd", 00:27:53.690 "is_configured": true, 00:27:53.690 "data_offset": 2048, 00:27:53.690 "data_size": 63488 00:27:53.690 }, 00:27:53.690 { 00:27:53.690 "name": "BaseBdev3", 00:27:53.690 "uuid": "18f8a96e-90b1-56a6-9846-bbf83c247826", 00:27:53.690 "is_configured": true, 00:27:53.690 "data_offset": 2048, 00:27:53.690 "data_size": 63488 00:27:53.690 }, 00:27:53.690 { 00:27:53.690 "name": "BaseBdev4", 00:27:53.690 "uuid": "f07f9267-1b1b-54b9-bd55-ae454736168f", 00:27:53.690 "is_configured": true, 00:27:53.690 "data_offset": 2048, 00:27:53.690 "data_size": 63488 00:27:53.690 } 00:27:53.690 ] 00:27:53.690 }' 00:27:53.690 11:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.690 11:51:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.256 11:51:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:54.256 11:51:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:54.514 [2024-06-10 11:51:26.344646] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.449 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.707 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.707 "name": "raid_bdev1", 00:27:55.707 "uuid": "f3952563-f6be-47e3-a966-87c2f6a874e7", 00:27:55.707 "strip_size_kb": 64, 00:27:55.707 "state": "online", 00:27:55.707 
"raid_level": "concat", 00:27:55.707 "superblock": true, 00:27:55.707 "num_base_bdevs": 4, 00:27:55.707 "num_base_bdevs_discovered": 4, 00:27:55.707 "num_base_bdevs_operational": 4, 00:27:55.707 "base_bdevs_list": [ 00:27:55.707 { 00:27:55.707 "name": "BaseBdev1", 00:27:55.707 "uuid": "1cba3b8d-a7f2-5341-9a30-ed277b96c77f", 00:27:55.707 "is_configured": true, 00:27:55.707 "data_offset": 2048, 00:27:55.707 "data_size": 63488 00:27:55.707 }, 00:27:55.707 { 00:27:55.707 "name": "BaseBdev2", 00:27:55.707 "uuid": "a7432dc1-269f-58b1-b8bc-8cf810deb2fd", 00:27:55.707 "is_configured": true, 00:27:55.707 "data_offset": 2048, 00:27:55.707 "data_size": 63488 00:27:55.707 }, 00:27:55.707 { 00:27:55.707 "name": "BaseBdev3", 00:27:55.707 "uuid": "18f8a96e-90b1-56a6-9846-bbf83c247826", 00:27:55.707 "is_configured": true, 00:27:55.707 "data_offset": 2048, 00:27:55.707 "data_size": 63488 00:27:55.707 }, 00:27:55.707 { 00:27:55.707 "name": "BaseBdev4", 00:27:55.708 "uuid": "f07f9267-1b1b-54b9-bd55-ae454736168f", 00:27:55.708 "is_configured": true, 00:27:55.708 "data_offset": 2048, 00:27:55.708 "data_size": 63488 00:27:55.708 } 00:27:55.708 ] 00:27:55.708 }' 00:27:55.708 11:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.708 11:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.321 11:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:56.579 [2024-06-10 11:51:28.546887] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:56.579 [2024-06-10 11:51:28.547093] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:56.579 [2024-06-10 11:51:28.549974] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:56.579 [2024-06-10 11:51:28.550139] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.579 [2024-06-10 11:51:28.550213] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:56.580 [2024-06-10 11:51:28.550299] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:27:56.580 0 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 141979 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 141979 ']' 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 141979 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 141979 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 141979' 00:27:56.580 killing process with pid 141979 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 141979 00:27:56.580 [2024-06-10 11:51:28.599026] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:56.580 11:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 141979 00:27:57.147 [2024-06-10 11:51:28.975184] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:59.068 11:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.9ATzAxpWnw 00:27:59.068 11:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:59.068 11:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:59.068 ************************************ 00:27:59.068 END TEST raid_write_error_test 00:27:59.068 ************************************ 00:27:59.068 11:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:27:59.069 11:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:27:59.069 11:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:59.069 11:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:59.069 11:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:27:59.069 00:27:59.069 real 0m9.552s 00:27:59.069 user 0m14.201s 00:27:59.069 sys 0m1.228s 00:27:59.069 11:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:59.069 11:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.069 11:51:30 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:27:59.069 11:51:30 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:27:59.069 11:51:30 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:27:59.069 11:51:30 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:59.069 11:51:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:59.069 ************************************ 00:27:59.069 START TEST raid_state_function_test 00:27:59.069 ************************************ 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 4 false 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=142202 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 142202' 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:59.069 Process raid pid: 142202 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 142202 /var/tmp/spdk-raid.sock 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 142202 ']' 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:59.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:59.069 11:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.069 [2024-06-10 11:51:30.790293] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
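For orientation, the state-machine exercise that follows can be condensed into the RPC sketch below: creating Existed_Raid before any base bdev exists leaves it "configuring", and it only moves to "online" once all four base bdevs have been created and claimed. The sketch assumes the bdev_svc app launched above is listening on /var/tmp/spdk-raid.sock; the actual script also deletes and re-creates the raid between steps, which is omitted here.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid   # no base bdevs yet
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'          # "configuring"
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b BaseBdev${i}        # each bdev is claimed by the raid as soon as it appears
done
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'          # "online"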
00:27:59.069 [2024-06-10 11:51:30.790760] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.069 [2024-06-10 11:51:30.977349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.327 [2024-06-10 11:51:31.241595] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.585 [2024-06-10 11:51:31.464065] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:59.843 11:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:59.843 11:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:27:59.843 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:00.102 [2024-06-10 11:51:31.954768] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:00.102 [2024-06-10 11:51:31.955062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:00.102 [2024-06-10 11:51:31.955175] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:00.102 [2024-06-10 11:51:31.955235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:00.102 [2024-06-10 11:51:31.955310] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:00.102 [2024-06-10 11:51:31.955359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:00.102 [2024-06-10 11:51:31.955427] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:00.102 [2024-06-10 11:51:31.955479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.102 11:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:28:00.360 11:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.360 "name": "Existed_Raid", 00:28:00.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.360 "strip_size_kb": 0, 00:28:00.360 "state": "configuring", 00:28:00.360 "raid_level": "raid1", 00:28:00.360 "superblock": false, 00:28:00.360 "num_base_bdevs": 4, 00:28:00.360 "num_base_bdevs_discovered": 0, 00:28:00.360 "num_base_bdevs_operational": 4, 00:28:00.360 "base_bdevs_list": [ 00:28:00.360 { 00:28:00.360 "name": "BaseBdev1", 00:28:00.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.360 "is_configured": false, 00:28:00.360 "data_offset": 0, 00:28:00.360 "data_size": 0 00:28:00.360 }, 00:28:00.360 { 00:28:00.360 "name": "BaseBdev2", 00:28:00.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.360 "is_configured": false, 00:28:00.360 "data_offset": 0, 00:28:00.360 "data_size": 0 00:28:00.360 }, 00:28:00.360 { 00:28:00.360 "name": "BaseBdev3", 00:28:00.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.360 "is_configured": false, 00:28:00.360 "data_offset": 0, 00:28:00.360 "data_size": 0 00:28:00.360 }, 00:28:00.360 { 00:28:00.360 "name": "BaseBdev4", 00:28:00.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.360 "is_configured": false, 00:28:00.360 "data_offset": 0, 00:28:00.360 "data_size": 0 00:28:00.360 } 00:28:00.360 ] 00:28:00.360 }' 00:28:00.360 11:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.360 11:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.926 11:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:01.184 [2024-06-10 11:51:32.986992] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:01.184 [2024-06-10 11:51:32.987229] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:28:01.184 11:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:01.184 [2024-06-10 11:51:33.199037] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:01.184 [2024-06-10 11:51:33.199319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:01.184 [2024-06-10 11:51:33.199413] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:01.184 [2024-06-10 11:51:33.199498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:01.185 [2024-06-10 11:51:33.199530] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:01.185 [2024-06-10 11:51:33.199631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:01.185 [2024-06-10 11:51:33.199665] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:01.185 [2024-06-10 11:51:33.199710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:01.185 11:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev1 00:28:01.751 [2024-06-10 11:51:33.503222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:01.751 BaseBdev1 00:28:01.751 11:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:01.751 11:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:28:01.751 11:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:01.751 11:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:28:01.751 11:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:01.751 11:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:01.751 11:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:01.751 11:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:02.010 [ 00:28:02.010 { 00:28:02.010 "name": "BaseBdev1", 00:28:02.010 "aliases": [ 00:28:02.010 "a46a2a9a-c302-43ac-97f4-6957680bf74a" 00:28:02.010 ], 00:28:02.010 "product_name": "Malloc disk", 00:28:02.010 "block_size": 512, 00:28:02.011 "num_blocks": 65536, 00:28:02.011 "uuid": "a46a2a9a-c302-43ac-97f4-6957680bf74a", 00:28:02.011 "assigned_rate_limits": { 00:28:02.011 "rw_ios_per_sec": 0, 00:28:02.011 "rw_mbytes_per_sec": 0, 00:28:02.011 "r_mbytes_per_sec": 0, 00:28:02.011 "w_mbytes_per_sec": 0 00:28:02.011 }, 00:28:02.011 "claimed": true, 00:28:02.011 "claim_type": "exclusive_write", 00:28:02.011 "zoned": false, 00:28:02.011 "supported_io_types": { 00:28:02.011 "read": true, 00:28:02.011 "write": true, 00:28:02.011 "unmap": true, 00:28:02.011 "write_zeroes": true, 00:28:02.011 "flush": true, 00:28:02.011 "reset": true, 00:28:02.011 "compare": false, 00:28:02.011 "compare_and_write": false, 00:28:02.011 "abort": true, 00:28:02.011 "nvme_admin": false, 00:28:02.011 "nvme_io": false 00:28:02.011 }, 00:28:02.011 "memory_domains": [ 00:28:02.011 { 00:28:02.011 "dma_device_id": "system", 00:28:02.011 "dma_device_type": 1 00:28:02.011 }, 00:28:02.011 { 00:28:02.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:02.011 "dma_device_type": 2 00:28:02.011 } 00:28:02.011 ], 00:28:02.011 "driver_specific": {} 00:28:02.011 } 00:28:02.011 ] 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.011 
11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.011 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.269 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.269 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.269 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.269 "name": "Existed_Raid", 00:28:02.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.269 "strip_size_kb": 0, 00:28:02.269 "state": "configuring", 00:28:02.269 "raid_level": "raid1", 00:28:02.269 "superblock": false, 00:28:02.269 "num_base_bdevs": 4, 00:28:02.269 "num_base_bdevs_discovered": 1, 00:28:02.269 "num_base_bdevs_operational": 4, 00:28:02.269 "base_bdevs_list": [ 00:28:02.269 { 00:28:02.269 "name": "BaseBdev1", 00:28:02.269 "uuid": "a46a2a9a-c302-43ac-97f4-6957680bf74a", 00:28:02.269 "is_configured": true, 00:28:02.269 "data_offset": 0, 00:28:02.269 "data_size": 65536 00:28:02.269 }, 00:28:02.269 { 00:28:02.269 "name": "BaseBdev2", 00:28:02.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.269 "is_configured": false, 00:28:02.269 "data_offset": 0, 00:28:02.269 "data_size": 0 00:28:02.269 }, 00:28:02.269 { 00:28:02.269 "name": "BaseBdev3", 00:28:02.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.269 "is_configured": false, 00:28:02.269 "data_offset": 0, 00:28:02.269 "data_size": 0 00:28:02.269 }, 00:28:02.269 { 00:28:02.269 "name": "BaseBdev4", 00:28:02.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.269 "is_configured": false, 00:28:02.269 "data_offset": 0, 00:28:02.269 "data_size": 0 00:28:02.269 } 00:28:02.269 ] 00:28:02.269 }' 00:28:02.269 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.269 11:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.836 11:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:03.094 [2024-06-10 11:51:35.023628] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:03.094 [2024-06-10 11:51:35.023846] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:28:03.094 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:03.353 [2024-06-10 11:51:35.295685] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:03.353 [2024-06-10 11:51:35.298217] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:03.353 [2024-06-10 11:51:35.298406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:03.353 [2024-06-10 11:51:35.298501] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:03.353 [2024-06-10 11:51:35.298567] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:03.353 [2024-06-10 11:51:35.298668] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:03.353 [2024-06-10 11:51:35.298727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.353 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.612 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.612 "name": "Existed_Raid", 00:28:03.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.612 "strip_size_kb": 0, 00:28:03.612 "state": "configuring", 00:28:03.612 "raid_level": "raid1", 00:28:03.612 "superblock": false, 00:28:03.612 "num_base_bdevs": 4, 00:28:03.612 "num_base_bdevs_discovered": 1, 00:28:03.612 "num_base_bdevs_operational": 4, 00:28:03.612 "base_bdevs_list": [ 00:28:03.612 { 00:28:03.612 "name": "BaseBdev1", 00:28:03.612 "uuid": "a46a2a9a-c302-43ac-97f4-6957680bf74a", 00:28:03.612 "is_configured": true, 00:28:03.612 "data_offset": 0, 00:28:03.612 "data_size": 65536 00:28:03.612 }, 00:28:03.612 { 00:28:03.612 "name": "BaseBdev2", 00:28:03.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.612 "is_configured": false, 00:28:03.612 "data_offset": 0, 00:28:03.612 "data_size": 0 00:28:03.612 }, 00:28:03.612 { 00:28:03.612 "name": "BaseBdev3", 00:28:03.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.612 "is_configured": false, 00:28:03.612 "data_offset": 0, 00:28:03.612 "data_size": 0 00:28:03.612 }, 00:28:03.612 { 00:28:03.612 "name": "BaseBdev4", 00:28:03.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.612 "is_configured": false, 00:28:03.612 "data_offset": 0, 00:28:03.612 "data_size": 0 00:28:03.612 } 00:28:03.612 ] 00:28:03.612 }' 00:28:03.612 11:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.612 
11:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.180 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:04.439 [2024-06-10 11:51:36.369215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:04.439 BaseBdev2 00:28:04.439 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:04.439 11:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:28:04.439 11:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:04.439 11:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:28:04.439 11:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:04.439 11:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:04.439 11:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:04.698 11:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:04.957 [ 00:28:04.957 { 00:28:04.957 "name": "BaseBdev2", 00:28:04.957 "aliases": [ 00:28:04.957 "ca43762a-482a-40e2-840d-ef0fc69900d8" 00:28:04.957 ], 00:28:04.957 "product_name": "Malloc disk", 00:28:04.957 "block_size": 512, 00:28:04.957 "num_blocks": 65536, 00:28:04.957 "uuid": "ca43762a-482a-40e2-840d-ef0fc69900d8", 00:28:04.957 "assigned_rate_limits": { 00:28:04.957 "rw_ios_per_sec": 0, 00:28:04.957 "rw_mbytes_per_sec": 0, 00:28:04.957 "r_mbytes_per_sec": 0, 00:28:04.957 "w_mbytes_per_sec": 0 00:28:04.957 }, 00:28:04.957 "claimed": true, 00:28:04.957 "claim_type": "exclusive_write", 00:28:04.957 "zoned": false, 00:28:04.957 "supported_io_types": { 00:28:04.957 "read": true, 00:28:04.957 "write": true, 00:28:04.957 "unmap": true, 00:28:04.957 "write_zeroes": true, 00:28:04.957 "flush": true, 00:28:04.957 "reset": true, 00:28:04.957 "compare": false, 00:28:04.957 "compare_and_write": false, 00:28:04.957 "abort": true, 00:28:04.957 "nvme_admin": false, 00:28:04.957 "nvme_io": false 00:28:04.957 }, 00:28:04.957 "memory_domains": [ 00:28:04.957 { 00:28:04.957 "dma_device_id": "system", 00:28:04.957 "dma_device_type": 1 00:28:04.957 }, 00:28:04.957 { 00:28:04.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.957 "dma_device_type": 2 00:28:04.957 } 00:28:04.957 ], 00:28:04.957 "driver_specific": {} 00:28:04.957 } 00:28:04.957 ] 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
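The verification steps around this point boil down to pulling one entry out of bdev_raid_get_bdevs and comparing its fields against the expected values; a hedged sketch of such a check is below (the exact comparisons live in bdev_raid.sh and may differ from this illustration).

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
state=$(echo "$info" | jq -r '.state')                              # stays "configuring" until every base bdev exists
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')     # 2 at this point in the log
[[ "$state" = configuring && "$discovered" -eq 2 ]] || echo "unexpected Existed_Raid state: $state ($discovered discovered)"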
00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.957 11:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.216 11:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.216 "name": "Existed_Raid", 00:28:05.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.216 "strip_size_kb": 0, 00:28:05.216 "state": "configuring", 00:28:05.216 "raid_level": "raid1", 00:28:05.216 "superblock": false, 00:28:05.216 "num_base_bdevs": 4, 00:28:05.216 "num_base_bdevs_discovered": 2, 00:28:05.216 "num_base_bdevs_operational": 4, 00:28:05.216 "base_bdevs_list": [ 00:28:05.216 { 00:28:05.216 "name": "BaseBdev1", 00:28:05.216 "uuid": "a46a2a9a-c302-43ac-97f4-6957680bf74a", 00:28:05.216 "is_configured": true, 00:28:05.216 "data_offset": 0, 00:28:05.216 "data_size": 65536 00:28:05.216 }, 00:28:05.216 { 00:28:05.216 "name": "BaseBdev2", 00:28:05.216 "uuid": "ca43762a-482a-40e2-840d-ef0fc69900d8", 00:28:05.216 "is_configured": true, 00:28:05.216 "data_offset": 0, 00:28:05.216 "data_size": 65536 00:28:05.216 }, 00:28:05.216 { 00:28:05.216 "name": "BaseBdev3", 00:28:05.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.216 "is_configured": false, 00:28:05.216 "data_offset": 0, 00:28:05.216 "data_size": 0 00:28:05.216 }, 00:28:05.216 { 00:28:05.216 "name": "BaseBdev4", 00:28:05.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.216 "is_configured": false, 00:28:05.216 "data_offset": 0, 00:28:05.216 "data_size": 0 00:28:05.216 } 00:28:05.216 ] 00:28:05.216 }' 00:28:05.216 11:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.216 11:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.784 11:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:06.043 [2024-06-10 11:51:38.084820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:06.043 BaseBdev3 00:28:06.302 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:06.302 11:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:28:06.302 11:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:06.302 11:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:28:06.302 11:51:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:06.302 11:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:06.302 11:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:06.617 [ 00:28:06.617 { 00:28:06.617 "name": "BaseBdev3", 00:28:06.617 "aliases": [ 00:28:06.617 "7b59f9ec-3fac-40c5-85eb-a2b4bb330ffa" 00:28:06.617 ], 00:28:06.617 "product_name": "Malloc disk", 00:28:06.617 "block_size": 512, 00:28:06.617 "num_blocks": 65536, 00:28:06.617 "uuid": "7b59f9ec-3fac-40c5-85eb-a2b4bb330ffa", 00:28:06.617 "assigned_rate_limits": { 00:28:06.617 "rw_ios_per_sec": 0, 00:28:06.617 "rw_mbytes_per_sec": 0, 00:28:06.617 "r_mbytes_per_sec": 0, 00:28:06.617 "w_mbytes_per_sec": 0 00:28:06.617 }, 00:28:06.617 "claimed": true, 00:28:06.617 "claim_type": "exclusive_write", 00:28:06.617 "zoned": false, 00:28:06.617 "supported_io_types": { 00:28:06.617 "read": true, 00:28:06.617 "write": true, 00:28:06.617 "unmap": true, 00:28:06.617 "write_zeroes": true, 00:28:06.617 "flush": true, 00:28:06.617 "reset": true, 00:28:06.617 "compare": false, 00:28:06.617 "compare_and_write": false, 00:28:06.617 "abort": true, 00:28:06.617 "nvme_admin": false, 00:28:06.617 "nvme_io": false 00:28:06.617 }, 00:28:06.617 "memory_domains": [ 00:28:06.617 { 00:28:06.617 "dma_device_id": "system", 00:28:06.617 "dma_device_type": 1 00:28:06.617 }, 00:28:06.617 { 00:28:06.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.617 "dma_device_type": 2 00:28:06.617 } 00:28:06.617 ], 00:28:06.617 "driver_specific": {} 00:28:06.617 } 00:28:06.617 ] 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.617 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.902 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.902 "name": "Existed_Raid", 00:28:06.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.902 "strip_size_kb": 0, 00:28:06.902 "state": "configuring", 00:28:06.902 "raid_level": "raid1", 00:28:06.902 "superblock": false, 00:28:06.902 "num_base_bdevs": 4, 00:28:06.902 "num_base_bdevs_discovered": 3, 00:28:06.902 "num_base_bdevs_operational": 4, 00:28:06.902 "base_bdevs_list": [ 00:28:06.902 { 00:28:06.902 "name": "BaseBdev1", 00:28:06.902 "uuid": "a46a2a9a-c302-43ac-97f4-6957680bf74a", 00:28:06.902 "is_configured": true, 00:28:06.902 "data_offset": 0, 00:28:06.902 "data_size": 65536 00:28:06.902 }, 00:28:06.902 { 00:28:06.902 "name": "BaseBdev2", 00:28:06.902 "uuid": "ca43762a-482a-40e2-840d-ef0fc69900d8", 00:28:06.902 "is_configured": true, 00:28:06.902 "data_offset": 0, 00:28:06.902 "data_size": 65536 00:28:06.902 }, 00:28:06.902 { 00:28:06.902 "name": "BaseBdev3", 00:28:06.902 "uuid": "7b59f9ec-3fac-40c5-85eb-a2b4bb330ffa", 00:28:06.903 "is_configured": true, 00:28:06.903 "data_offset": 0, 00:28:06.903 "data_size": 65536 00:28:06.903 }, 00:28:06.903 { 00:28:06.903 "name": "BaseBdev4", 00:28:06.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.903 "is_configured": false, 00:28:06.903 "data_offset": 0, 00:28:06.903 "data_size": 0 00:28:06.903 } 00:28:06.903 ] 00:28:06.903 }' 00:28:06.903 11:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.903 11:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.469 11:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:07.727 [2024-06-10 11:51:39.672934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:07.727 [2024-06-10 11:51:39.673229] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:28:07.727 [2024-06-10 11:51:39.673275] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:07.727 [2024-06-10 11:51:39.673502] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:28:07.727 [2024-06-10 11:51:39.673924] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:28:07.727 [2024-06-10 11:51:39.674037] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:28:07.727 [2024-06-10 11:51:39.674340] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:07.727 BaseBdev4 00:28:07.727 11:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:28:07.727 11:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:28:07.727 11:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:07.727 11:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:28:07.727 11:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:07.727 11:51:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:07.727 11:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:07.986 11:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:08.245 [ 00:28:08.245 { 00:28:08.245 "name": "BaseBdev4", 00:28:08.245 "aliases": [ 00:28:08.245 "56b839e2-6c0a-4b3e-9326-10c2ab56a238" 00:28:08.245 ], 00:28:08.245 "product_name": "Malloc disk", 00:28:08.245 "block_size": 512, 00:28:08.245 "num_blocks": 65536, 00:28:08.245 "uuid": "56b839e2-6c0a-4b3e-9326-10c2ab56a238", 00:28:08.245 "assigned_rate_limits": { 00:28:08.245 "rw_ios_per_sec": 0, 00:28:08.245 "rw_mbytes_per_sec": 0, 00:28:08.245 "r_mbytes_per_sec": 0, 00:28:08.245 "w_mbytes_per_sec": 0 00:28:08.245 }, 00:28:08.245 "claimed": true, 00:28:08.245 "claim_type": "exclusive_write", 00:28:08.245 "zoned": false, 00:28:08.245 "supported_io_types": { 00:28:08.245 "read": true, 00:28:08.245 "write": true, 00:28:08.245 "unmap": true, 00:28:08.245 "write_zeroes": true, 00:28:08.245 "flush": true, 00:28:08.245 "reset": true, 00:28:08.245 "compare": false, 00:28:08.245 "compare_and_write": false, 00:28:08.245 "abort": true, 00:28:08.245 "nvme_admin": false, 00:28:08.245 "nvme_io": false 00:28:08.245 }, 00:28:08.245 "memory_domains": [ 00:28:08.245 { 00:28:08.245 "dma_device_id": "system", 00:28:08.245 "dma_device_type": 1 00:28:08.245 }, 00:28:08.245 { 00:28:08.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.245 "dma_device_type": 2 00:28:08.245 } 00:28:08.245 ], 00:28:08.245 "driver_specific": {} 00:28:08.245 } 00:28:08.245 ] 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.245 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:28:08.504 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.504 "name": "Existed_Raid", 00:28:08.504 "uuid": "4212516f-fc71-482d-8f74-13d6c030730d", 00:28:08.504 "strip_size_kb": 0, 00:28:08.504 "state": "online", 00:28:08.504 "raid_level": "raid1", 00:28:08.504 "superblock": false, 00:28:08.504 "num_base_bdevs": 4, 00:28:08.504 "num_base_bdevs_discovered": 4, 00:28:08.504 "num_base_bdevs_operational": 4, 00:28:08.504 "base_bdevs_list": [ 00:28:08.504 { 00:28:08.504 "name": "BaseBdev1", 00:28:08.504 "uuid": "a46a2a9a-c302-43ac-97f4-6957680bf74a", 00:28:08.504 "is_configured": true, 00:28:08.504 "data_offset": 0, 00:28:08.504 "data_size": 65536 00:28:08.504 }, 00:28:08.504 { 00:28:08.504 "name": "BaseBdev2", 00:28:08.504 "uuid": "ca43762a-482a-40e2-840d-ef0fc69900d8", 00:28:08.504 "is_configured": true, 00:28:08.504 "data_offset": 0, 00:28:08.504 "data_size": 65536 00:28:08.504 }, 00:28:08.504 { 00:28:08.504 "name": "BaseBdev3", 00:28:08.504 "uuid": "7b59f9ec-3fac-40c5-85eb-a2b4bb330ffa", 00:28:08.504 "is_configured": true, 00:28:08.504 "data_offset": 0, 00:28:08.504 "data_size": 65536 00:28:08.504 }, 00:28:08.504 { 00:28:08.504 "name": "BaseBdev4", 00:28:08.504 "uuid": "56b839e2-6c0a-4b3e-9326-10c2ab56a238", 00:28:08.504 "is_configured": true, 00:28:08.504 "data_offset": 0, 00:28:08.504 "data_size": 65536 00:28:08.504 } 00:28:08.504 ] 00:28:08.504 }' 00:28:08.504 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.504 11:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.070 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:09.070 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:09.070 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:09.070 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:09.070 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:09.070 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:09.070 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:09.070 11:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:09.328 [2024-06-10 11:51:41.254443] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.328 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:09.328 "name": "Existed_Raid", 00:28:09.328 "aliases": [ 00:28:09.328 "4212516f-fc71-482d-8f74-13d6c030730d" 00:28:09.328 ], 00:28:09.328 "product_name": "Raid Volume", 00:28:09.328 "block_size": 512, 00:28:09.328 "num_blocks": 65536, 00:28:09.328 "uuid": "4212516f-fc71-482d-8f74-13d6c030730d", 00:28:09.328 "assigned_rate_limits": { 00:28:09.328 "rw_ios_per_sec": 0, 00:28:09.328 "rw_mbytes_per_sec": 0, 00:28:09.328 "r_mbytes_per_sec": 0, 00:28:09.328 "w_mbytes_per_sec": 0 00:28:09.328 }, 00:28:09.328 "claimed": false, 00:28:09.328 "zoned": false, 00:28:09.328 "supported_io_types": { 00:28:09.328 "read": true, 00:28:09.328 "write": true, 00:28:09.328 "unmap": false, 00:28:09.328 "write_zeroes": 
true, 00:28:09.328 "flush": false, 00:28:09.328 "reset": true, 00:28:09.328 "compare": false, 00:28:09.328 "compare_and_write": false, 00:28:09.328 "abort": false, 00:28:09.328 "nvme_admin": false, 00:28:09.328 "nvme_io": false 00:28:09.328 }, 00:28:09.328 "memory_domains": [ 00:28:09.328 { 00:28:09.328 "dma_device_id": "system", 00:28:09.328 "dma_device_type": 1 00:28:09.328 }, 00:28:09.328 { 00:28:09.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.328 "dma_device_type": 2 00:28:09.328 }, 00:28:09.328 { 00:28:09.328 "dma_device_id": "system", 00:28:09.328 "dma_device_type": 1 00:28:09.328 }, 00:28:09.328 { 00:28:09.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.328 "dma_device_type": 2 00:28:09.328 }, 00:28:09.328 { 00:28:09.328 "dma_device_id": "system", 00:28:09.328 "dma_device_type": 1 00:28:09.329 }, 00:28:09.329 { 00:28:09.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.329 "dma_device_type": 2 00:28:09.329 }, 00:28:09.329 { 00:28:09.329 "dma_device_id": "system", 00:28:09.329 "dma_device_type": 1 00:28:09.329 }, 00:28:09.329 { 00:28:09.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.329 "dma_device_type": 2 00:28:09.329 } 00:28:09.329 ], 00:28:09.329 "driver_specific": { 00:28:09.329 "raid": { 00:28:09.329 "uuid": "4212516f-fc71-482d-8f74-13d6c030730d", 00:28:09.329 "strip_size_kb": 0, 00:28:09.329 "state": "online", 00:28:09.329 "raid_level": "raid1", 00:28:09.329 "superblock": false, 00:28:09.329 "num_base_bdevs": 4, 00:28:09.329 "num_base_bdevs_discovered": 4, 00:28:09.329 "num_base_bdevs_operational": 4, 00:28:09.329 "base_bdevs_list": [ 00:28:09.329 { 00:28:09.329 "name": "BaseBdev1", 00:28:09.329 "uuid": "a46a2a9a-c302-43ac-97f4-6957680bf74a", 00:28:09.329 "is_configured": true, 00:28:09.329 "data_offset": 0, 00:28:09.329 "data_size": 65536 00:28:09.329 }, 00:28:09.329 { 00:28:09.329 "name": "BaseBdev2", 00:28:09.329 "uuid": "ca43762a-482a-40e2-840d-ef0fc69900d8", 00:28:09.329 "is_configured": true, 00:28:09.329 "data_offset": 0, 00:28:09.329 "data_size": 65536 00:28:09.329 }, 00:28:09.329 { 00:28:09.329 "name": "BaseBdev3", 00:28:09.329 "uuid": "7b59f9ec-3fac-40c5-85eb-a2b4bb330ffa", 00:28:09.329 "is_configured": true, 00:28:09.329 "data_offset": 0, 00:28:09.329 "data_size": 65536 00:28:09.329 }, 00:28:09.329 { 00:28:09.329 "name": "BaseBdev4", 00:28:09.329 "uuid": "56b839e2-6c0a-4b3e-9326-10c2ab56a238", 00:28:09.329 "is_configured": true, 00:28:09.329 "data_offset": 0, 00:28:09.329 "data_size": 65536 00:28:09.329 } 00:28:09.329 ] 00:28:09.329 } 00:28:09.329 } 00:28:09.329 }' 00:28:09.329 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.329 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:09.329 BaseBdev2 00:28:09.329 BaseBdev3 00:28:09.329 BaseBdev4' 00:28:09.329 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:09.329 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:09.329 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:09.586 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:09.586 "name": "BaseBdev1", 00:28:09.586 "aliases": [ 00:28:09.586 "a46a2a9a-c302-43ac-97f4-6957680bf74a" 00:28:09.586 ], 
00:28:09.586 "product_name": "Malloc disk", 00:28:09.586 "block_size": 512, 00:28:09.586 "num_blocks": 65536, 00:28:09.586 "uuid": "a46a2a9a-c302-43ac-97f4-6957680bf74a", 00:28:09.586 "assigned_rate_limits": { 00:28:09.586 "rw_ios_per_sec": 0, 00:28:09.586 "rw_mbytes_per_sec": 0, 00:28:09.586 "r_mbytes_per_sec": 0, 00:28:09.586 "w_mbytes_per_sec": 0 00:28:09.586 }, 00:28:09.586 "claimed": true, 00:28:09.586 "claim_type": "exclusive_write", 00:28:09.586 "zoned": false, 00:28:09.586 "supported_io_types": { 00:28:09.586 "read": true, 00:28:09.586 "write": true, 00:28:09.586 "unmap": true, 00:28:09.586 "write_zeroes": true, 00:28:09.586 "flush": true, 00:28:09.586 "reset": true, 00:28:09.586 "compare": false, 00:28:09.586 "compare_and_write": false, 00:28:09.586 "abort": true, 00:28:09.586 "nvme_admin": false, 00:28:09.586 "nvme_io": false 00:28:09.586 }, 00:28:09.586 "memory_domains": [ 00:28:09.586 { 00:28:09.586 "dma_device_id": "system", 00:28:09.586 "dma_device_type": 1 00:28:09.586 }, 00:28:09.586 { 00:28:09.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.586 "dma_device_type": 2 00:28:09.586 } 00:28:09.586 ], 00:28:09.586 "driver_specific": {} 00:28:09.586 }' 00:28:09.586 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:09.586 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:09.844 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:09.844 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:09.844 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:09.844 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:09.844 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:09.844 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:09.844 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:09.844 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.154 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.154 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:10.154 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:10.154 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:10.154 11:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:10.411 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.411 "name": "BaseBdev2", 00:28:10.411 "aliases": [ 00:28:10.411 "ca43762a-482a-40e2-840d-ef0fc69900d8" 00:28:10.411 ], 00:28:10.411 "product_name": "Malloc disk", 00:28:10.411 "block_size": 512, 00:28:10.411 "num_blocks": 65536, 00:28:10.411 "uuid": "ca43762a-482a-40e2-840d-ef0fc69900d8", 00:28:10.411 "assigned_rate_limits": { 00:28:10.411 "rw_ios_per_sec": 0, 00:28:10.411 "rw_mbytes_per_sec": 0, 00:28:10.411 "r_mbytes_per_sec": 0, 00:28:10.411 "w_mbytes_per_sec": 0 00:28:10.411 }, 00:28:10.411 "claimed": true, 00:28:10.411 "claim_type": "exclusive_write", 00:28:10.411 "zoned": false, 00:28:10.411 
"supported_io_types": { 00:28:10.411 "read": true, 00:28:10.411 "write": true, 00:28:10.411 "unmap": true, 00:28:10.411 "write_zeroes": true, 00:28:10.411 "flush": true, 00:28:10.411 "reset": true, 00:28:10.411 "compare": false, 00:28:10.411 "compare_and_write": false, 00:28:10.411 "abort": true, 00:28:10.411 "nvme_admin": false, 00:28:10.412 "nvme_io": false 00:28:10.412 }, 00:28:10.412 "memory_domains": [ 00:28:10.412 { 00:28:10.412 "dma_device_id": "system", 00:28:10.412 "dma_device_type": 1 00:28:10.412 }, 00:28:10.412 { 00:28:10.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.412 "dma_device_type": 2 00:28:10.412 } 00:28:10.412 ], 00:28:10.412 "driver_specific": {} 00:28:10.412 }' 00:28:10.412 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.412 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.412 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:10.412 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.412 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.412 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:10.412 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.675 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.675 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:10.675 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.675 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.675 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:10.676 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:10.676 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:10.676 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:10.934 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.934 "name": "BaseBdev3", 00:28:10.934 "aliases": [ 00:28:10.934 "7b59f9ec-3fac-40c5-85eb-a2b4bb330ffa" 00:28:10.934 ], 00:28:10.934 "product_name": "Malloc disk", 00:28:10.934 "block_size": 512, 00:28:10.934 "num_blocks": 65536, 00:28:10.934 "uuid": "7b59f9ec-3fac-40c5-85eb-a2b4bb330ffa", 00:28:10.934 "assigned_rate_limits": { 00:28:10.934 "rw_ios_per_sec": 0, 00:28:10.934 "rw_mbytes_per_sec": 0, 00:28:10.934 "r_mbytes_per_sec": 0, 00:28:10.934 "w_mbytes_per_sec": 0 00:28:10.934 }, 00:28:10.934 "claimed": true, 00:28:10.934 "claim_type": "exclusive_write", 00:28:10.934 "zoned": false, 00:28:10.934 "supported_io_types": { 00:28:10.934 "read": true, 00:28:10.934 "write": true, 00:28:10.934 "unmap": true, 00:28:10.934 "write_zeroes": true, 00:28:10.934 "flush": true, 00:28:10.934 "reset": true, 00:28:10.934 "compare": false, 00:28:10.934 "compare_and_write": false, 00:28:10.934 "abort": true, 00:28:10.934 "nvme_admin": false, 00:28:10.934 "nvme_io": false 00:28:10.934 }, 00:28:10.934 "memory_domains": [ 00:28:10.934 { 00:28:10.934 "dma_device_id": "system", 00:28:10.934 "dma_device_type": 1 
00:28:10.934 }, 00:28:10.934 { 00:28:10.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.934 "dma_device_type": 2 00:28:10.934 } 00:28:10.934 ], 00:28:10.934 "driver_specific": {} 00:28:10.934 }' 00:28:10.934 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.934 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.934 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:10.934 11:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:11.192 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:11.450 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:11.450 "name": "BaseBdev4", 00:28:11.450 "aliases": [ 00:28:11.450 "56b839e2-6c0a-4b3e-9326-10c2ab56a238" 00:28:11.450 ], 00:28:11.450 "product_name": "Malloc disk", 00:28:11.450 "block_size": 512, 00:28:11.450 "num_blocks": 65536, 00:28:11.450 "uuid": "56b839e2-6c0a-4b3e-9326-10c2ab56a238", 00:28:11.450 "assigned_rate_limits": { 00:28:11.450 "rw_ios_per_sec": 0, 00:28:11.450 "rw_mbytes_per_sec": 0, 00:28:11.450 "r_mbytes_per_sec": 0, 00:28:11.450 "w_mbytes_per_sec": 0 00:28:11.450 }, 00:28:11.450 "claimed": true, 00:28:11.450 "claim_type": "exclusive_write", 00:28:11.450 "zoned": false, 00:28:11.450 "supported_io_types": { 00:28:11.450 "read": true, 00:28:11.450 "write": true, 00:28:11.450 "unmap": true, 00:28:11.450 "write_zeroes": true, 00:28:11.450 "flush": true, 00:28:11.450 "reset": true, 00:28:11.450 "compare": false, 00:28:11.450 "compare_and_write": false, 00:28:11.450 "abort": true, 00:28:11.450 "nvme_admin": false, 00:28:11.450 "nvme_io": false 00:28:11.450 }, 00:28:11.450 "memory_domains": [ 00:28:11.450 { 00:28:11.450 "dma_device_id": "system", 00:28:11.450 "dma_device_type": 1 00:28:11.450 }, 00:28:11.450 { 00:28:11.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.450 "dma_device_type": 2 00:28:11.450 } 00:28:11.450 ], 00:28:11.450 "driver_specific": {} 00:28:11.450 }' 00:28:11.450 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.707 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.707 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:28:11.707 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.707 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.707 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:11.707 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.707 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.707 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:11.965 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.965 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.965 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:11.965 11:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:12.224 [2024-06-10 11:51:44.102835] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.224 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.484 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.484 "name": "Existed_Raid", 00:28:12.484 "uuid": "4212516f-fc71-482d-8f74-13d6c030730d", 00:28:12.484 "strip_size_kb": 0, 00:28:12.484 "state": "online", 00:28:12.484 "raid_level": "raid1", 00:28:12.484 "superblock": false, 
00:28:12.484 "num_base_bdevs": 4, 00:28:12.484 "num_base_bdevs_discovered": 3, 00:28:12.484 "num_base_bdevs_operational": 3, 00:28:12.484 "base_bdevs_list": [ 00:28:12.484 { 00:28:12.484 "name": null, 00:28:12.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.484 "is_configured": false, 00:28:12.484 "data_offset": 0, 00:28:12.484 "data_size": 65536 00:28:12.484 }, 00:28:12.484 { 00:28:12.484 "name": "BaseBdev2", 00:28:12.484 "uuid": "ca43762a-482a-40e2-840d-ef0fc69900d8", 00:28:12.484 "is_configured": true, 00:28:12.484 "data_offset": 0, 00:28:12.484 "data_size": 65536 00:28:12.484 }, 00:28:12.484 { 00:28:12.484 "name": "BaseBdev3", 00:28:12.484 "uuid": "7b59f9ec-3fac-40c5-85eb-a2b4bb330ffa", 00:28:12.484 "is_configured": true, 00:28:12.484 "data_offset": 0, 00:28:12.484 "data_size": 65536 00:28:12.484 }, 00:28:12.484 { 00:28:12.484 "name": "BaseBdev4", 00:28:12.484 "uuid": "56b839e2-6c0a-4b3e-9326-10c2ab56a238", 00:28:12.484 "is_configured": true, 00:28:12.484 "data_offset": 0, 00:28:12.484 "data_size": 65536 00:28:12.484 } 00:28:12.484 ] 00:28:12.484 }' 00:28:12.484 11:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.484 11:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.051 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:13.051 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:13.051 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.051 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:13.310 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:13.310 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:13.310 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:13.568 [2024-06-10 11:51:45.515250] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:13.827 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:13.827 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:13.827 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:13.827 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.827 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:13.827 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:13.827 11:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:14.085 [2024-06-10 11:51:46.056464] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:14.343 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:14.343 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 
00:28:14.343 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.343 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:14.601 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:14.601 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:14.601 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:14.601 [2024-06-10 11:51:46.644741] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:14.601 [2024-06-10 11:51:46.645057] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:14.859 [2024-06-10 11:51:46.758724] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:14.859 [2024-06-10 11:51:46.759015] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:14.859 [2024-06-10 11:51:46.759120] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:28:14.859 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:14.859 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:14.859 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.859 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:15.117 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:15.117 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:15.117 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:28:15.117 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:15.117 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:15.117 11:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:15.375 BaseBdev2 00:28:15.375 11:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:15.375 11:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:28:15.375 11:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:15.375 11:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:28:15.375 11:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:15.375 11:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:15.375 11:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:15.633 11:51:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:15.891 [ 00:28:15.891 { 00:28:15.891 "name": "BaseBdev2", 00:28:15.891 "aliases": [ 00:28:15.891 "8a024e8d-f9bb-47a7-bd18-e1e044778522" 00:28:15.891 ], 00:28:15.891 "product_name": "Malloc disk", 00:28:15.891 "block_size": 512, 00:28:15.891 "num_blocks": 65536, 00:28:15.891 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:15.891 "assigned_rate_limits": { 00:28:15.891 "rw_ios_per_sec": 0, 00:28:15.891 "rw_mbytes_per_sec": 0, 00:28:15.891 "r_mbytes_per_sec": 0, 00:28:15.891 "w_mbytes_per_sec": 0 00:28:15.891 }, 00:28:15.891 "claimed": false, 00:28:15.891 "zoned": false, 00:28:15.891 "supported_io_types": { 00:28:15.891 "read": true, 00:28:15.891 "write": true, 00:28:15.891 "unmap": true, 00:28:15.891 "write_zeroes": true, 00:28:15.891 "flush": true, 00:28:15.891 "reset": true, 00:28:15.891 "compare": false, 00:28:15.891 "compare_and_write": false, 00:28:15.891 "abort": true, 00:28:15.891 "nvme_admin": false, 00:28:15.891 "nvme_io": false 00:28:15.891 }, 00:28:15.891 "memory_domains": [ 00:28:15.891 { 00:28:15.891 "dma_device_id": "system", 00:28:15.891 "dma_device_type": 1 00:28:15.891 }, 00:28:15.891 { 00:28:15.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.891 "dma_device_type": 2 00:28:15.891 } 00:28:15.891 ], 00:28:15.891 "driver_specific": {} 00:28:15.891 } 00:28:15.891 ] 00:28:15.891 11:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:15.891 11:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:15.891 11:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:15.891 11:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:16.148 BaseBdev3 00:28:16.148 11:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:16.148 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:28:16.148 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:16.148 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:28:16.148 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:16.148 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:16.148 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:16.406 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:16.664 [ 00:28:16.664 { 00:28:16.664 "name": "BaseBdev3", 00:28:16.664 "aliases": [ 00:28:16.664 "1b4c7ab2-466a-4350-b946-5ed295f78fa7" 00:28:16.664 ], 00:28:16.664 "product_name": "Malloc disk", 00:28:16.664 "block_size": 512, 00:28:16.664 "num_blocks": 65536, 00:28:16.664 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:16.664 "assigned_rate_limits": { 00:28:16.664 "rw_ios_per_sec": 0, 00:28:16.664 "rw_mbytes_per_sec": 0, 00:28:16.664 "r_mbytes_per_sec": 0, 00:28:16.664 
"w_mbytes_per_sec": 0 00:28:16.664 }, 00:28:16.664 "claimed": false, 00:28:16.664 "zoned": false, 00:28:16.664 "supported_io_types": { 00:28:16.664 "read": true, 00:28:16.664 "write": true, 00:28:16.664 "unmap": true, 00:28:16.664 "write_zeroes": true, 00:28:16.664 "flush": true, 00:28:16.664 "reset": true, 00:28:16.665 "compare": false, 00:28:16.665 "compare_and_write": false, 00:28:16.665 "abort": true, 00:28:16.665 "nvme_admin": false, 00:28:16.665 "nvme_io": false 00:28:16.665 }, 00:28:16.665 "memory_domains": [ 00:28:16.665 { 00:28:16.665 "dma_device_id": "system", 00:28:16.665 "dma_device_type": 1 00:28:16.665 }, 00:28:16.665 { 00:28:16.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.665 "dma_device_type": 2 00:28:16.665 } 00:28:16.665 ], 00:28:16.665 "driver_specific": {} 00:28:16.665 } 00:28:16.665 ] 00:28:16.665 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:16.665 11:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:16.665 11:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:16.665 11:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:16.923 BaseBdev4 00:28:16.923 11:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:28:16.923 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:28:16.923 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:16.923 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:28:16.923 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:16.923 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:16.923 11:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:17.181 11:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:17.439 [ 00:28:17.439 { 00:28:17.439 "name": "BaseBdev4", 00:28:17.439 "aliases": [ 00:28:17.439 "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83" 00:28:17.439 ], 00:28:17.439 "product_name": "Malloc disk", 00:28:17.439 "block_size": 512, 00:28:17.439 "num_blocks": 65536, 00:28:17.439 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:17.439 "assigned_rate_limits": { 00:28:17.439 "rw_ios_per_sec": 0, 00:28:17.439 "rw_mbytes_per_sec": 0, 00:28:17.439 "r_mbytes_per_sec": 0, 00:28:17.439 "w_mbytes_per_sec": 0 00:28:17.439 }, 00:28:17.439 "claimed": false, 00:28:17.439 "zoned": false, 00:28:17.439 "supported_io_types": { 00:28:17.439 "read": true, 00:28:17.439 "write": true, 00:28:17.439 "unmap": true, 00:28:17.439 "write_zeroes": true, 00:28:17.439 "flush": true, 00:28:17.439 "reset": true, 00:28:17.439 "compare": false, 00:28:17.439 "compare_and_write": false, 00:28:17.439 "abort": true, 00:28:17.439 "nvme_admin": false, 00:28:17.439 "nvme_io": false 00:28:17.439 }, 00:28:17.439 "memory_domains": [ 00:28:17.439 { 00:28:17.439 "dma_device_id": "system", 00:28:17.439 "dma_device_type": 1 00:28:17.439 }, 00:28:17.439 { 
00:28:17.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.439 "dma_device_type": 2 00:28:17.439 } 00:28:17.439 ], 00:28:17.439 "driver_specific": {} 00:28:17.439 } 00:28:17.439 ] 00:28:17.439 11:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:17.439 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:17.439 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:17.440 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:17.697 [2024-06-10 11:51:49.586217] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:17.697 [2024-06-10 11:51:49.587267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:17.697 [2024-06-10 11:51:49.587466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:17.697 [2024-06-10 11:51:49.589994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:17.697 [2024-06-10 11:51:49.590201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:17.697 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.955 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:17.955 "name": "Existed_Raid", 00:28:17.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.955 "strip_size_kb": 0, 00:28:17.955 "state": "configuring", 00:28:17.955 "raid_level": "raid1", 00:28:17.955 "superblock": false, 00:28:17.955 "num_base_bdevs": 4, 00:28:17.955 "num_base_bdevs_discovered": 3, 00:28:17.955 "num_base_bdevs_operational": 4, 00:28:17.955 "base_bdevs_list": [ 00:28:17.955 { 00:28:17.955 "name": "BaseBdev1", 00:28:17.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.955 "is_configured": false, 00:28:17.955 "data_offset": 0, 00:28:17.955 
"data_size": 0 00:28:17.955 }, 00:28:17.955 { 00:28:17.955 "name": "BaseBdev2", 00:28:17.955 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:17.955 "is_configured": true, 00:28:17.955 "data_offset": 0, 00:28:17.955 "data_size": 65536 00:28:17.955 }, 00:28:17.955 { 00:28:17.955 "name": "BaseBdev3", 00:28:17.955 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:17.955 "is_configured": true, 00:28:17.955 "data_offset": 0, 00:28:17.955 "data_size": 65536 00:28:17.955 }, 00:28:17.955 { 00:28:17.955 "name": "BaseBdev4", 00:28:17.955 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:17.955 "is_configured": true, 00:28:17.955 "data_offset": 0, 00:28:17.955 "data_size": 65536 00:28:17.955 } 00:28:17.955 ] 00:28:17.955 }' 00:28:17.955 11:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:17.955 11:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:18.519 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:18.778 [2024-06-10 11:51:50.698810] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.778 11:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.036 11:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.036 "name": "Existed_Raid", 00:28:19.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.036 "strip_size_kb": 0, 00:28:19.036 "state": "configuring", 00:28:19.036 "raid_level": "raid1", 00:28:19.036 "superblock": false, 00:28:19.036 "num_base_bdevs": 4, 00:28:19.036 "num_base_bdevs_discovered": 2, 00:28:19.036 "num_base_bdevs_operational": 4, 00:28:19.036 "base_bdevs_list": [ 00:28:19.036 { 00:28:19.036 "name": "BaseBdev1", 00:28:19.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.036 "is_configured": false, 00:28:19.036 "data_offset": 0, 00:28:19.036 "data_size": 0 00:28:19.036 }, 00:28:19.036 { 00:28:19.036 "name": null, 00:28:19.036 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:19.036 
"is_configured": false, 00:28:19.036 "data_offset": 0, 00:28:19.036 "data_size": 65536 00:28:19.036 }, 00:28:19.036 { 00:28:19.036 "name": "BaseBdev3", 00:28:19.036 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:19.036 "is_configured": true, 00:28:19.036 "data_offset": 0, 00:28:19.036 "data_size": 65536 00:28:19.036 }, 00:28:19.036 { 00:28:19.036 "name": "BaseBdev4", 00:28:19.036 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:19.036 "is_configured": true, 00:28:19.036 "data_offset": 0, 00:28:19.036 "data_size": 65536 00:28:19.036 } 00:28:19.036 ] 00:28:19.036 }' 00:28:19.036 11:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.036 11:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.971 11:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:19.971 11:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.971 11:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:19.971 11:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:20.237 [2024-06-10 11:51:52.239340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:20.237 BaseBdev1 00:28:20.237 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:20.237 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:28:20.237 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:20.237 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:28:20.237 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:20.237 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:20.237 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:20.806 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:21.065 [ 00:28:21.065 { 00:28:21.065 "name": "BaseBdev1", 00:28:21.065 "aliases": [ 00:28:21.065 "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9" 00:28:21.065 ], 00:28:21.065 "product_name": "Malloc disk", 00:28:21.065 "block_size": 512, 00:28:21.065 "num_blocks": 65536, 00:28:21.065 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:21.065 "assigned_rate_limits": { 00:28:21.065 "rw_ios_per_sec": 0, 00:28:21.065 "rw_mbytes_per_sec": 0, 00:28:21.065 "r_mbytes_per_sec": 0, 00:28:21.065 "w_mbytes_per_sec": 0 00:28:21.065 }, 00:28:21.065 "claimed": true, 00:28:21.065 "claim_type": "exclusive_write", 00:28:21.065 "zoned": false, 00:28:21.065 "supported_io_types": { 00:28:21.065 "read": true, 00:28:21.065 "write": true, 00:28:21.065 "unmap": true, 00:28:21.065 "write_zeroes": true, 00:28:21.065 "flush": true, 00:28:21.065 "reset": true, 00:28:21.065 "compare": false, 00:28:21.065 "compare_and_write": false, 00:28:21.065 "abort": 
true, 00:28:21.065 "nvme_admin": false, 00:28:21.065 "nvme_io": false 00:28:21.065 }, 00:28:21.065 "memory_domains": [ 00:28:21.065 { 00:28:21.065 "dma_device_id": "system", 00:28:21.065 "dma_device_type": 1 00:28:21.065 }, 00:28:21.065 { 00:28:21.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:21.065 "dma_device_type": 2 00:28:21.065 } 00:28:21.065 ], 00:28:21.065 "driver_specific": {} 00:28:21.065 } 00:28:21.065 ] 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.065 11:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.324 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:21.324 "name": "Existed_Raid", 00:28:21.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.324 "strip_size_kb": 0, 00:28:21.324 "state": "configuring", 00:28:21.324 "raid_level": "raid1", 00:28:21.324 "superblock": false, 00:28:21.324 "num_base_bdevs": 4, 00:28:21.324 "num_base_bdevs_discovered": 3, 00:28:21.324 "num_base_bdevs_operational": 4, 00:28:21.324 "base_bdevs_list": [ 00:28:21.324 { 00:28:21.324 "name": "BaseBdev1", 00:28:21.324 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:21.324 "is_configured": true, 00:28:21.324 "data_offset": 0, 00:28:21.324 "data_size": 65536 00:28:21.324 }, 00:28:21.324 { 00:28:21.324 "name": null, 00:28:21.324 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:21.324 "is_configured": false, 00:28:21.324 "data_offset": 0, 00:28:21.324 "data_size": 65536 00:28:21.324 }, 00:28:21.324 { 00:28:21.324 "name": "BaseBdev3", 00:28:21.324 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:21.324 "is_configured": true, 00:28:21.324 "data_offset": 0, 00:28:21.324 "data_size": 65536 00:28:21.324 }, 00:28:21.324 { 00:28:21.324 "name": "BaseBdev4", 00:28:21.324 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:21.324 "is_configured": true, 00:28:21.324 "data_offset": 0, 00:28:21.324 "data_size": 65536 00:28:21.324 } 00:28:21.324 ] 00:28:21.324 }' 00:28:21.324 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:21.324 11:51:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.891 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.891 11:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:22.149 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:22.149 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:22.408 [2024-06-10 11:51:54.367187] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.408 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:22.666 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:22.666 "name": "Existed_Raid", 00:28:22.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.666 "strip_size_kb": 0, 00:28:22.666 "state": "configuring", 00:28:22.666 "raid_level": "raid1", 00:28:22.666 "superblock": false, 00:28:22.666 "num_base_bdevs": 4, 00:28:22.666 "num_base_bdevs_discovered": 2, 00:28:22.666 "num_base_bdevs_operational": 4, 00:28:22.666 "base_bdevs_list": [ 00:28:22.666 { 00:28:22.666 "name": "BaseBdev1", 00:28:22.666 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:22.666 "is_configured": true, 00:28:22.666 "data_offset": 0, 00:28:22.666 "data_size": 65536 00:28:22.666 }, 00:28:22.666 { 00:28:22.666 "name": null, 00:28:22.666 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:22.666 "is_configured": false, 00:28:22.666 "data_offset": 0, 00:28:22.666 "data_size": 65536 00:28:22.666 }, 00:28:22.666 { 00:28:22.666 "name": null, 00:28:22.666 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:22.666 "is_configured": false, 00:28:22.666 "data_offset": 0, 00:28:22.666 "data_size": 65536 00:28:22.666 }, 00:28:22.666 { 00:28:22.666 "name": "BaseBdev4", 00:28:22.666 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 
00:28:22.666 "is_configured": true, 00:28:22.666 "data_offset": 0, 00:28:22.666 "data_size": 65536 00:28:22.666 } 00:28:22.666 ] 00:28:22.666 }' 00:28:22.666 11:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.666 11:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.600 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.600 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:23.858 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:23.858 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:24.117 [2024-06-10 11:51:55.935845] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.117 11:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:24.375 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.375 "name": "Existed_Raid", 00:28:24.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.375 "strip_size_kb": 0, 00:28:24.375 "state": "configuring", 00:28:24.375 "raid_level": "raid1", 00:28:24.375 "superblock": false, 00:28:24.375 "num_base_bdevs": 4, 00:28:24.375 "num_base_bdevs_discovered": 3, 00:28:24.375 "num_base_bdevs_operational": 4, 00:28:24.375 "base_bdevs_list": [ 00:28:24.375 { 00:28:24.375 "name": "BaseBdev1", 00:28:24.375 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:24.375 "is_configured": true, 00:28:24.375 "data_offset": 0, 00:28:24.375 "data_size": 65536 00:28:24.375 }, 00:28:24.375 { 00:28:24.375 "name": null, 00:28:24.375 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:24.375 "is_configured": false, 00:28:24.375 "data_offset": 0, 00:28:24.375 "data_size": 65536 00:28:24.375 }, 00:28:24.375 { 00:28:24.375 
"name": "BaseBdev3", 00:28:24.375 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:24.375 "is_configured": true, 00:28:24.375 "data_offset": 0, 00:28:24.375 "data_size": 65536 00:28:24.375 }, 00:28:24.375 { 00:28:24.375 "name": "BaseBdev4", 00:28:24.375 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:24.375 "is_configured": true, 00:28:24.375 "data_offset": 0, 00:28:24.375 "data_size": 65536 00:28:24.375 } 00:28:24.375 ] 00:28:24.375 }' 00:28:24.375 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.375 11:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.940 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.940 11:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:25.199 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:25.199 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:25.457 [2024-06-10 11:51:57.288195] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.457 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:25.716 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:25.716 "name": "Existed_Raid", 00:28:25.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.716 "strip_size_kb": 0, 00:28:25.716 "state": "configuring", 00:28:25.716 "raid_level": "raid1", 00:28:25.716 "superblock": false, 00:28:25.716 "num_base_bdevs": 4, 00:28:25.716 "num_base_bdevs_discovered": 2, 00:28:25.716 "num_base_bdevs_operational": 4, 00:28:25.716 "base_bdevs_list": [ 00:28:25.716 { 00:28:25.716 "name": null, 00:28:25.716 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:25.716 "is_configured": false, 00:28:25.716 "data_offset": 0, 00:28:25.716 "data_size": 65536 
00:28:25.716 }, 00:28:25.716 { 00:28:25.716 "name": null, 00:28:25.716 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:25.716 "is_configured": false, 00:28:25.716 "data_offset": 0, 00:28:25.716 "data_size": 65536 00:28:25.716 }, 00:28:25.716 { 00:28:25.716 "name": "BaseBdev3", 00:28:25.716 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:25.716 "is_configured": true, 00:28:25.716 "data_offset": 0, 00:28:25.716 "data_size": 65536 00:28:25.716 }, 00:28:25.716 { 00:28:25.716 "name": "BaseBdev4", 00:28:25.716 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:25.716 "is_configured": true, 00:28:25.716 "data_offset": 0, 00:28:25.716 "data_size": 65536 00:28:25.716 } 00:28:25.716 ] 00:28:25.716 }' 00:28:25.716 11:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:25.716 11:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.283 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:26.283 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.849 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:26.849 11:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:27.107 [2024-06-10 11:51:58.985341] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.107 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:27.366 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:27.366 "name": "Existed_Raid", 00:28:27.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.366 "strip_size_kb": 0, 00:28:27.366 "state": "configuring", 00:28:27.366 "raid_level": "raid1", 00:28:27.366 "superblock": false, 00:28:27.366 "num_base_bdevs": 4, 00:28:27.366 
"num_base_bdevs_discovered": 3, 00:28:27.366 "num_base_bdevs_operational": 4, 00:28:27.366 "base_bdevs_list": [ 00:28:27.366 { 00:28:27.366 "name": null, 00:28:27.366 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:27.366 "is_configured": false, 00:28:27.366 "data_offset": 0, 00:28:27.366 "data_size": 65536 00:28:27.366 }, 00:28:27.366 { 00:28:27.366 "name": "BaseBdev2", 00:28:27.366 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:27.366 "is_configured": true, 00:28:27.366 "data_offset": 0, 00:28:27.366 "data_size": 65536 00:28:27.366 }, 00:28:27.366 { 00:28:27.366 "name": "BaseBdev3", 00:28:27.366 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:27.366 "is_configured": true, 00:28:27.366 "data_offset": 0, 00:28:27.366 "data_size": 65536 00:28:27.366 }, 00:28:27.366 { 00:28:27.366 "name": "BaseBdev4", 00:28:27.366 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:27.366 "is_configured": true, 00:28:27.366 "data_offset": 0, 00:28:27.366 "data_size": 65536 00:28:27.366 } 00:28:27.366 ] 00:28:27.366 }' 00:28:27.366 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:27.366 11:51:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.931 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:27.931 11:51:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.190 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:28.190 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:28.190 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.448 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9 00:28:28.706 [2024-06-10 11:52:00.624031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:28.706 [2024-06-10 11:52:00.624321] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:28:28.706 [2024-06-10 11:52:00.624364] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:28.706 [2024-06-10 11:52:00.624643] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:28.706 [2024-06-10 11:52:00.625081] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:28:28.706 [2024-06-10 11:52:00.625201] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:28:28.706 [2024-06-10 11:52:00.625527] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.706 NewBaseBdev 00:28:28.706 11:52:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:28:28.706 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:28:28.706 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:28.706 11:52:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local i 00:28:28.706 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:28.706 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:28.706 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:28.964 11:52:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:29.222 [ 00:28:29.222 { 00:28:29.222 "name": "NewBaseBdev", 00:28:29.222 "aliases": [ 00:28:29.222 "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9" 00:28:29.222 ], 00:28:29.222 "product_name": "Malloc disk", 00:28:29.222 "block_size": 512, 00:28:29.222 "num_blocks": 65536, 00:28:29.222 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:29.222 "assigned_rate_limits": { 00:28:29.222 "rw_ios_per_sec": 0, 00:28:29.222 "rw_mbytes_per_sec": 0, 00:28:29.222 "r_mbytes_per_sec": 0, 00:28:29.222 "w_mbytes_per_sec": 0 00:28:29.222 }, 00:28:29.222 "claimed": true, 00:28:29.222 "claim_type": "exclusive_write", 00:28:29.222 "zoned": false, 00:28:29.222 "supported_io_types": { 00:28:29.222 "read": true, 00:28:29.222 "write": true, 00:28:29.222 "unmap": true, 00:28:29.222 "write_zeroes": true, 00:28:29.222 "flush": true, 00:28:29.222 "reset": true, 00:28:29.222 "compare": false, 00:28:29.222 "compare_and_write": false, 00:28:29.222 "abort": true, 00:28:29.222 "nvme_admin": false, 00:28:29.222 "nvme_io": false 00:28:29.222 }, 00:28:29.222 "memory_domains": [ 00:28:29.222 { 00:28:29.222 "dma_device_id": "system", 00:28:29.222 "dma_device_type": 1 00:28:29.222 }, 00:28:29.222 { 00:28:29.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:29.222 "dma_device_type": 2 00:28:29.222 } 00:28:29.222 ], 00:28:29.222 "driver_specific": {} 00:28:29.222 } 00:28:29.222 ] 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.222 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:28:29.481 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.481 "name": "Existed_Raid", 00:28:29.481 "uuid": "754b04ac-753f-4966-a7e8-71380899b143", 00:28:29.481 "strip_size_kb": 0, 00:28:29.481 "state": "online", 00:28:29.481 "raid_level": "raid1", 00:28:29.481 "superblock": false, 00:28:29.481 "num_base_bdevs": 4, 00:28:29.481 "num_base_bdevs_discovered": 4, 00:28:29.481 "num_base_bdevs_operational": 4, 00:28:29.481 "base_bdevs_list": [ 00:28:29.481 { 00:28:29.481 "name": "NewBaseBdev", 00:28:29.481 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:29.481 "is_configured": true, 00:28:29.481 "data_offset": 0, 00:28:29.481 "data_size": 65536 00:28:29.481 }, 00:28:29.481 { 00:28:29.481 "name": "BaseBdev2", 00:28:29.481 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:29.481 "is_configured": true, 00:28:29.481 "data_offset": 0, 00:28:29.481 "data_size": 65536 00:28:29.481 }, 00:28:29.481 { 00:28:29.481 "name": "BaseBdev3", 00:28:29.481 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:29.481 "is_configured": true, 00:28:29.481 "data_offset": 0, 00:28:29.481 "data_size": 65536 00:28:29.481 }, 00:28:29.481 { 00:28:29.481 "name": "BaseBdev4", 00:28:29.481 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:29.481 "is_configured": true, 00:28:29.481 "data_offset": 0, 00:28:29.481 "data_size": 65536 00:28:29.481 } 00:28:29.481 ] 00:28:29.481 }' 00:28:29.481 11:52:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.481 11:52:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:30.413 [2024-06-10 11:52:02.308738] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:30.413 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:30.413 "name": "Existed_Raid", 00:28:30.413 "aliases": [ 00:28:30.413 "754b04ac-753f-4966-a7e8-71380899b143" 00:28:30.413 ], 00:28:30.413 "product_name": "Raid Volume", 00:28:30.413 "block_size": 512, 00:28:30.413 "num_blocks": 65536, 00:28:30.413 "uuid": "754b04ac-753f-4966-a7e8-71380899b143", 00:28:30.413 "assigned_rate_limits": { 00:28:30.413 "rw_ios_per_sec": 0, 00:28:30.413 "rw_mbytes_per_sec": 0, 00:28:30.413 "r_mbytes_per_sec": 0, 00:28:30.413 "w_mbytes_per_sec": 0 00:28:30.413 }, 00:28:30.413 "claimed": false, 00:28:30.413 "zoned": false, 00:28:30.413 "supported_io_types": { 00:28:30.413 "read": true, 00:28:30.413 "write": true, 00:28:30.413 "unmap": false, 00:28:30.413 
"write_zeroes": true, 00:28:30.413 "flush": false, 00:28:30.413 "reset": true, 00:28:30.413 "compare": false, 00:28:30.413 "compare_and_write": false, 00:28:30.413 "abort": false, 00:28:30.413 "nvme_admin": false, 00:28:30.413 "nvme_io": false 00:28:30.413 }, 00:28:30.413 "memory_domains": [ 00:28:30.413 { 00:28:30.413 "dma_device_id": "system", 00:28:30.413 "dma_device_type": 1 00:28:30.413 }, 00:28:30.413 { 00:28:30.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.413 "dma_device_type": 2 00:28:30.413 }, 00:28:30.413 { 00:28:30.413 "dma_device_id": "system", 00:28:30.413 "dma_device_type": 1 00:28:30.413 }, 00:28:30.413 { 00:28:30.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.413 "dma_device_type": 2 00:28:30.413 }, 00:28:30.413 { 00:28:30.413 "dma_device_id": "system", 00:28:30.413 "dma_device_type": 1 00:28:30.413 }, 00:28:30.413 { 00:28:30.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.413 "dma_device_type": 2 00:28:30.413 }, 00:28:30.413 { 00:28:30.413 "dma_device_id": "system", 00:28:30.413 "dma_device_type": 1 00:28:30.413 }, 00:28:30.413 { 00:28:30.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.413 "dma_device_type": 2 00:28:30.413 } 00:28:30.413 ], 00:28:30.413 "driver_specific": { 00:28:30.413 "raid": { 00:28:30.413 "uuid": "754b04ac-753f-4966-a7e8-71380899b143", 00:28:30.413 "strip_size_kb": 0, 00:28:30.413 "state": "online", 00:28:30.413 "raid_level": "raid1", 00:28:30.413 "superblock": false, 00:28:30.413 "num_base_bdevs": 4, 00:28:30.413 "num_base_bdevs_discovered": 4, 00:28:30.413 "num_base_bdevs_operational": 4, 00:28:30.413 "base_bdevs_list": [ 00:28:30.413 { 00:28:30.413 "name": "NewBaseBdev", 00:28:30.413 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:30.414 "is_configured": true, 00:28:30.414 "data_offset": 0, 00:28:30.414 "data_size": 65536 00:28:30.414 }, 00:28:30.414 { 00:28:30.414 "name": "BaseBdev2", 00:28:30.414 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:30.414 "is_configured": true, 00:28:30.414 "data_offset": 0, 00:28:30.414 "data_size": 65536 00:28:30.414 }, 00:28:30.414 { 00:28:30.414 "name": "BaseBdev3", 00:28:30.414 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:30.414 "is_configured": true, 00:28:30.414 "data_offset": 0, 00:28:30.414 "data_size": 65536 00:28:30.414 }, 00:28:30.414 { 00:28:30.414 "name": "BaseBdev4", 00:28:30.414 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:30.414 "is_configured": true, 00:28:30.414 "data_offset": 0, 00:28:30.414 "data_size": 65536 00:28:30.414 } 00:28:30.414 ] 00:28:30.414 } 00:28:30.414 } 00:28:30.414 }' 00:28:30.414 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:30.414 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:30.414 BaseBdev2 00:28:30.414 BaseBdev3 00:28:30.414 BaseBdev4' 00:28:30.414 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:30.414 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:30.414 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:30.672 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:30.672 "name": "NewBaseBdev", 00:28:30.672 "aliases": [ 00:28:30.672 
"8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9" 00:28:30.672 ], 00:28:30.672 "product_name": "Malloc disk", 00:28:30.672 "block_size": 512, 00:28:30.672 "num_blocks": 65536, 00:28:30.672 "uuid": "8a8b36a2-e7e6-4b6b-8148-89baf7c3b1d9", 00:28:30.672 "assigned_rate_limits": { 00:28:30.672 "rw_ios_per_sec": 0, 00:28:30.672 "rw_mbytes_per_sec": 0, 00:28:30.672 "r_mbytes_per_sec": 0, 00:28:30.672 "w_mbytes_per_sec": 0 00:28:30.672 }, 00:28:30.672 "claimed": true, 00:28:30.672 "claim_type": "exclusive_write", 00:28:30.672 "zoned": false, 00:28:30.672 "supported_io_types": { 00:28:30.672 "read": true, 00:28:30.672 "write": true, 00:28:30.672 "unmap": true, 00:28:30.672 "write_zeroes": true, 00:28:30.672 "flush": true, 00:28:30.672 "reset": true, 00:28:30.672 "compare": false, 00:28:30.672 "compare_and_write": false, 00:28:30.672 "abort": true, 00:28:30.672 "nvme_admin": false, 00:28:30.672 "nvme_io": false 00:28:30.672 }, 00:28:30.672 "memory_domains": [ 00:28:30.672 { 00:28:30.672 "dma_device_id": "system", 00:28:30.672 "dma_device_type": 1 00:28:30.672 }, 00:28:30.672 { 00:28:30.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.672 "dma_device_type": 2 00:28:30.672 } 00:28:30.672 ], 00:28:30.672 "driver_specific": {} 00:28:30.672 }' 00:28:30.672 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.672 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.930 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:30.930 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:30.930 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:30.930 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:30.930 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.930 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.930 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:30.930 11:52:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:31.187 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:31.187 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:31.187 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:31.187 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:31.187 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:31.445 "name": "BaseBdev2", 00:28:31.445 "aliases": [ 00:28:31.445 "8a024e8d-f9bb-47a7-bd18-e1e044778522" 00:28:31.445 ], 00:28:31.445 "product_name": "Malloc disk", 00:28:31.445 "block_size": 512, 00:28:31.445 "num_blocks": 65536, 00:28:31.445 "uuid": "8a024e8d-f9bb-47a7-bd18-e1e044778522", 00:28:31.445 "assigned_rate_limits": { 00:28:31.445 "rw_ios_per_sec": 0, 00:28:31.445 "rw_mbytes_per_sec": 0, 00:28:31.445 "r_mbytes_per_sec": 0, 00:28:31.445 "w_mbytes_per_sec": 0 00:28:31.445 }, 00:28:31.445 "claimed": true, 00:28:31.445 "claim_type": "exclusive_write", 
00:28:31.445 "zoned": false, 00:28:31.445 "supported_io_types": { 00:28:31.445 "read": true, 00:28:31.445 "write": true, 00:28:31.445 "unmap": true, 00:28:31.445 "write_zeroes": true, 00:28:31.445 "flush": true, 00:28:31.445 "reset": true, 00:28:31.445 "compare": false, 00:28:31.445 "compare_and_write": false, 00:28:31.445 "abort": true, 00:28:31.445 "nvme_admin": false, 00:28:31.445 "nvme_io": false 00:28:31.445 }, 00:28:31.445 "memory_domains": [ 00:28:31.445 { 00:28:31.445 "dma_device_id": "system", 00:28:31.445 "dma_device_type": 1 00:28:31.445 }, 00:28:31.445 { 00:28:31.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:31.445 "dma_device_type": 2 00:28:31.445 } 00:28:31.445 ], 00:28:31.445 "driver_specific": {} 00:28:31.445 }' 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:31.445 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:31.703 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:31.703 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:31.703 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:31.703 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:31.703 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:31.703 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:31.703 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:32.007 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:32.007 "name": "BaseBdev3", 00:28:32.007 "aliases": [ 00:28:32.007 "1b4c7ab2-466a-4350-b946-5ed295f78fa7" 00:28:32.007 ], 00:28:32.007 "product_name": "Malloc disk", 00:28:32.007 "block_size": 512, 00:28:32.007 "num_blocks": 65536, 00:28:32.007 "uuid": "1b4c7ab2-466a-4350-b946-5ed295f78fa7", 00:28:32.007 "assigned_rate_limits": { 00:28:32.007 "rw_ios_per_sec": 0, 00:28:32.007 "rw_mbytes_per_sec": 0, 00:28:32.007 "r_mbytes_per_sec": 0, 00:28:32.007 "w_mbytes_per_sec": 0 00:28:32.007 }, 00:28:32.007 "claimed": true, 00:28:32.007 "claim_type": "exclusive_write", 00:28:32.007 "zoned": false, 00:28:32.007 "supported_io_types": { 00:28:32.007 "read": true, 00:28:32.007 "write": true, 00:28:32.007 "unmap": true, 00:28:32.007 "write_zeroes": true, 00:28:32.007 "flush": true, 00:28:32.007 "reset": true, 00:28:32.007 "compare": false, 00:28:32.007 "compare_and_write": false, 00:28:32.007 "abort": true, 00:28:32.007 "nvme_admin": false, 00:28:32.007 "nvme_io": false 00:28:32.007 }, 00:28:32.007 "memory_domains": [ 00:28:32.007 { 00:28:32.007 "dma_device_id": 
"system", 00:28:32.007 "dma_device_type": 1 00:28:32.007 }, 00:28:32.007 { 00:28:32.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.007 "dma_device_type": 2 00:28:32.007 } 00:28:32.007 ], 00:28:32.007 "driver_specific": {} 00:28:32.007 }' 00:28:32.007 11:52:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:32.007 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:32.267 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:32.267 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:32.267 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:32.267 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:32.267 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:32.267 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:32.267 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:32.267 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:32.526 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:32.526 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:32.526 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:32.526 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:32.526 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:32.784 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:32.784 "name": "BaseBdev4", 00:28:32.784 "aliases": [ 00:28:32.784 "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83" 00:28:32.784 ], 00:28:32.784 "product_name": "Malloc disk", 00:28:32.784 "block_size": 512, 00:28:32.784 "num_blocks": 65536, 00:28:32.784 "uuid": "fd0046e9-da44-49ff-8cbb-7e7c5fa50f83", 00:28:32.784 "assigned_rate_limits": { 00:28:32.784 "rw_ios_per_sec": 0, 00:28:32.784 "rw_mbytes_per_sec": 0, 00:28:32.784 "r_mbytes_per_sec": 0, 00:28:32.784 "w_mbytes_per_sec": 0 00:28:32.784 }, 00:28:32.784 "claimed": true, 00:28:32.784 "claim_type": "exclusive_write", 00:28:32.784 "zoned": false, 00:28:32.784 "supported_io_types": { 00:28:32.784 "read": true, 00:28:32.784 "write": true, 00:28:32.784 "unmap": true, 00:28:32.784 "write_zeroes": true, 00:28:32.784 "flush": true, 00:28:32.784 "reset": true, 00:28:32.784 "compare": false, 00:28:32.784 "compare_and_write": false, 00:28:32.784 "abort": true, 00:28:32.784 "nvme_admin": false, 00:28:32.784 "nvme_io": false 00:28:32.784 }, 00:28:32.784 "memory_domains": [ 00:28:32.784 { 00:28:32.784 "dma_device_id": "system", 00:28:32.784 "dma_device_type": 1 00:28:32.785 }, 00:28:32.785 { 00:28:32.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.785 "dma_device_type": 2 00:28:32.785 } 00:28:32.785 ], 00:28:32.785 "driver_specific": {} 00:28:32.785 }' 00:28:32.785 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:32.785 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:32.785 11:52:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:33.041 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:33.041 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:33.041 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:33.042 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:33.042 11:52:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:33.042 11:52:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:33.042 11:52:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:33.042 11:52:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:33.299 11:52:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:33.299 11:52:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:33.558 [2024-06-10 11:52:05.385148] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:33.558 [2024-06-10 11:52:05.385317] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:33.558 [2024-06-10 11:52:05.385472] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:33.558 [2024-06-10 11:52:05.385859] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:33.558 [2024-06-10 11:52:05.385962] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 142202 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 142202 ']' 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 142202 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 142202 00:28:33.558 killing process with pid 142202 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 142202' 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 142202 00:28:33.558 [2024-06-10 11:52:05.438036] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:33.558 11:52:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 142202 00:28:34.137 [2024-06-10 11:52:05.890391] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:28:35.511 00:28:35.511 real 0m36.688s 00:28:35.511 user 1m6.493s 
00:28:35.511 sys 0m5.176s 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.511 ************************************ 00:28:35.511 END TEST raid_state_function_test 00:28:35.511 ************************************ 00:28:35.511 11:52:07 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:28:35.511 11:52:07 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:28:35.511 11:52:07 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:35.511 11:52:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:35.511 ************************************ 00:28:35.511 START TEST raid_state_function_test_sb 00:28:35.511 ************************************ 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 4 true 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=143332 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 143332' 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:35.511 Process raid pid: 143332 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 143332 /var/tmp/spdk-raid.sock 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 143332 ']' 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:35.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:35.511 11:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:35.511 [2024-06-10 11:52:07.539851] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:28:35.511 [2024-06-10 11:52:07.540260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.769 [2024-06-10 11:52:07.713412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.028 [2024-06-10 11:52:07.991441] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.287 [2024-06-10 11:52:08.230247] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:36.545 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:36.545 11:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:28:36.545 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:36.803 [2024-06-10 11:52:08.783519] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:36.803 [2024-06-10 11:52:08.783874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:36.803 [2024-06-10 11:52:08.783983] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:36.803 [2024-06-10 11:52:08.784148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:36.803 [2024-06-10 11:52:08.784242] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:36.803 [2024-06-10 11:52:08.784300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:36.803 [2024-06-10 11:52:08.784404] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:36.803 [2024-06-10 11:52:08.784477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:36.803 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:36.804 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:36.804 11:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.804 11:52:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.369 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.369 "name": "Existed_Raid", 00:28:37.369 "uuid": "b2b50dd7-be2f-43dc-a608-b94cb86af939", 00:28:37.369 "strip_size_kb": 0, 00:28:37.369 "state": "configuring", 00:28:37.369 "raid_level": "raid1", 00:28:37.369 "superblock": true, 00:28:37.369 "num_base_bdevs": 4, 00:28:37.369 "num_base_bdevs_discovered": 0, 00:28:37.369 "num_base_bdevs_operational": 4, 00:28:37.369 "base_bdevs_list": [ 00:28:37.369 { 00:28:37.369 "name": "BaseBdev1", 00:28:37.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.369 "is_configured": false, 00:28:37.369 "data_offset": 0, 00:28:37.369 "data_size": 0 00:28:37.369 }, 00:28:37.369 { 00:28:37.369 "name": "BaseBdev2", 00:28:37.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.369 "is_configured": false, 00:28:37.369 "data_offset": 0, 00:28:37.369 "data_size": 0 00:28:37.369 }, 00:28:37.369 { 00:28:37.369 "name": "BaseBdev3", 00:28:37.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.369 "is_configured": false, 00:28:37.369 "data_offset": 0, 00:28:37.369 "data_size": 0 00:28:37.369 }, 00:28:37.369 { 00:28:37.369 "name": "BaseBdev4", 00:28:37.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.369 "is_configured": false, 00:28:37.369 "data_offset": 0, 00:28:37.369 "data_size": 0 00:28:37.369 } 00:28:37.369 ] 00:28:37.369 }' 00:28:37.369 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.369 11:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.936 11:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:38.195 [2024-06-10 11:52:10.047631] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:38.195 [2024-06-10 11:52:10.047896] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:28:38.195 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:38.454 [2024-06-10 11:52:10.455734] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:38.454 [2024-06-10 11:52:10.456036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:38.454 [2024-06-10 11:52:10.456213] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:38.454 [2024-06-10 11:52:10.456319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:38.454 [2024-06-10 11:52:10.456467] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:38.454 [2024-06-10 11:52:10.456549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:38.454 [2024-06-10 11:52:10.456755] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:38.454 [2024-06-10 11:52:10.456822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:38.454 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:38.732 [2024-06-10 11:52:10.709663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:38.732 BaseBdev1 00:28:38.732 11:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:38.732 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:28:38.732 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:38.732 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:28:38.732 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:38.732 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:38.732 11:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:38.989 11:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:39.247 [ 00:28:39.247 { 00:28:39.247 "name": "BaseBdev1", 00:28:39.247 "aliases": [ 00:28:39.247 "a3d0455c-1429-4c5d-826c-92790e0b7bab" 00:28:39.247 ], 00:28:39.247 "product_name": "Malloc disk", 00:28:39.247 "block_size": 512, 00:28:39.247 "num_blocks": 65536, 00:28:39.247 "uuid": "a3d0455c-1429-4c5d-826c-92790e0b7bab", 00:28:39.247 "assigned_rate_limits": { 00:28:39.247 "rw_ios_per_sec": 0, 00:28:39.247 "rw_mbytes_per_sec": 0, 00:28:39.247 "r_mbytes_per_sec": 0, 00:28:39.247 "w_mbytes_per_sec": 0 00:28:39.247 }, 00:28:39.247 "claimed": true, 00:28:39.247 "claim_type": "exclusive_write", 00:28:39.247 "zoned": false, 00:28:39.247 "supported_io_types": { 00:28:39.247 "read": true, 00:28:39.247 "write": true, 00:28:39.247 "unmap": true, 00:28:39.247 "write_zeroes": true, 00:28:39.247 "flush": true, 00:28:39.247 "reset": true, 00:28:39.247 "compare": false, 00:28:39.247 "compare_and_write": false, 00:28:39.247 "abort": true, 00:28:39.247 "nvme_admin": false, 00:28:39.247 "nvme_io": false 00:28:39.247 }, 00:28:39.247 "memory_domains": [ 00:28:39.247 { 00:28:39.247 "dma_device_id": "system", 00:28:39.247 "dma_device_type": 1 00:28:39.247 }, 00:28:39.247 { 00:28:39.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:39.247 "dma_device_type": 2 00:28:39.247 } 00:28:39.247 ], 00:28:39.247 "driver_specific": {} 00:28:39.247 } 00:28:39.247 ] 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.247 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:39.814 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.814 "name": "Existed_Raid", 00:28:39.814 "uuid": "e4583ec4-8be0-4273-997e-7cc12141c533", 00:28:39.814 "strip_size_kb": 0, 00:28:39.814 "state": "configuring", 00:28:39.814 "raid_level": "raid1", 00:28:39.814 "superblock": true, 00:28:39.814 "num_base_bdevs": 4, 00:28:39.814 "num_base_bdevs_discovered": 1, 00:28:39.814 "num_base_bdevs_operational": 4, 00:28:39.814 "base_bdevs_list": [ 00:28:39.814 { 00:28:39.814 "name": "BaseBdev1", 00:28:39.814 "uuid": "a3d0455c-1429-4c5d-826c-92790e0b7bab", 00:28:39.814 "is_configured": true, 00:28:39.814 "data_offset": 2048, 00:28:39.814 "data_size": 63488 00:28:39.814 }, 00:28:39.814 { 00:28:39.814 "name": "BaseBdev2", 00:28:39.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.814 "is_configured": false, 00:28:39.814 "data_offset": 0, 00:28:39.814 "data_size": 0 00:28:39.814 }, 00:28:39.814 { 00:28:39.814 "name": "BaseBdev3", 00:28:39.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.814 "is_configured": false, 00:28:39.814 "data_offset": 0, 00:28:39.814 "data_size": 0 00:28:39.814 }, 00:28:39.814 { 00:28:39.814 "name": "BaseBdev4", 00:28:39.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.814 "is_configured": false, 00:28:39.814 "data_offset": 0, 00:28:39.814 "data_size": 0 00:28:39.814 } 00:28:39.814 ] 00:28:39.814 }' 00:28:39.814 11:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.814 11:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.379 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:40.637 [2024-06-10 11:52:12.450129] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:40.637 [2024-06-10 11:52:12.450401] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:28:40.637 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:40.895 [2024-06-10 11:52:12.742254] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:40.895 [2024-06-10 11:52:12.744797] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:40.895 [2024-06-10 11:52:12.745000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:40.895 [2024-06-10 11:52:12.745108] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:40.895 [2024-06-10 11:52:12.745179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:40.895 [2024-06-10 11:52:12.745264] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:40.895 [2024-06-10 11:52:12.745322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.895 11:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:41.153 11:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:41.153 "name": "Existed_Raid", 00:28:41.153 "uuid": "7bddfc51-996f-47f5-8719-2b41c4e2ddc5", 00:28:41.153 "strip_size_kb": 0, 00:28:41.153 "state": "configuring", 00:28:41.153 "raid_level": "raid1", 00:28:41.153 "superblock": true, 00:28:41.153 "num_base_bdevs": 4, 00:28:41.153 "num_base_bdevs_discovered": 1, 00:28:41.153 "num_base_bdevs_operational": 4, 00:28:41.153 "base_bdevs_list": [ 00:28:41.153 { 00:28:41.153 "name": "BaseBdev1", 00:28:41.153 "uuid": "a3d0455c-1429-4c5d-826c-92790e0b7bab", 00:28:41.153 "is_configured": true, 00:28:41.153 "data_offset": 2048, 00:28:41.153 "data_size": 63488 00:28:41.153 }, 00:28:41.153 { 00:28:41.153 "name": "BaseBdev2", 00:28:41.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.153 "is_configured": false, 00:28:41.153 "data_offset": 0, 00:28:41.153 "data_size": 0 00:28:41.153 }, 00:28:41.153 { 00:28:41.153 "name": "BaseBdev3", 00:28:41.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.153 "is_configured": false, 00:28:41.153 "data_offset": 0, 00:28:41.153 "data_size": 0 00:28:41.153 }, 00:28:41.153 { 00:28:41.153 "name": "BaseBdev4", 00:28:41.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.153 "is_configured": false, 00:28:41.153 
"data_offset": 0, 00:28:41.153 "data_size": 0 00:28:41.153 } 00:28:41.153 ] 00:28:41.153 }' 00:28:41.153 11:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:41.153 11:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:42.086 11:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:42.086 [2024-06-10 11:52:14.124004] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:42.086 BaseBdev2 00:28:42.086 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:42.086 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:28:42.086 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:42.086 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:28:42.345 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:42.345 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:42.345 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:42.345 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:42.603 [ 00:28:42.603 { 00:28:42.603 "name": "BaseBdev2", 00:28:42.603 "aliases": [ 00:28:42.603 "98c5ccbd-c4d2-4f01-97cd-107facbfbecd" 00:28:42.603 ], 00:28:42.603 "product_name": "Malloc disk", 00:28:42.603 "block_size": 512, 00:28:42.603 "num_blocks": 65536, 00:28:42.603 "uuid": "98c5ccbd-c4d2-4f01-97cd-107facbfbecd", 00:28:42.603 "assigned_rate_limits": { 00:28:42.603 "rw_ios_per_sec": 0, 00:28:42.603 "rw_mbytes_per_sec": 0, 00:28:42.603 "r_mbytes_per_sec": 0, 00:28:42.603 "w_mbytes_per_sec": 0 00:28:42.603 }, 00:28:42.603 "claimed": true, 00:28:42.603 "claim_type": "exclusive_write", 00:28:42.603 "zoned": false, 00:28:42.603 "supported_io_types": { 00:28:42.603 "read": true, 00:28:42.603 "write": true, 00:28:42.603 "unmap": true, 00:28:42.603 "write_zeroes": true, 00:28:42.603 "flush": true, 00:28:42.603 "reset": true, 00:28:42.603 "compare": false, 00:28:42.603 "compare_and_write": false, 00:28:42.603 "abort": true, 00:28:42.603 "nvme_admin": false, 00:28:42.603 "nvme_io": false 00:28:42.603 }, 00:28:42.603 "memory_domains": [ 00:28:42.603 { 00:28:42.603 "dma_device_id": "system", 00:28:42.603 "dma_device_type": 1 00:28:42.603 }, 00:28:42.603 { 00:28:42.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:42.603 "dma_device_type": 2 00:28:42.603 } 00:28:42.603 ], 00:28:42.603 "driver_specific": {} 00:28:42.603 } 00:28:42.603 ] 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.603 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:43.170 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:43.170 "name": "Existed_Raid", 00:28:43.170 "uuid": "7bddfc51-996f-47f5-8719-2b41c4e2ddc5", 00:28:43.170 "strip_size_kb": 0, 00:28:43.170 "state": "configuring", 00:28:43.170 "raid_level": "raid1", 00:28:43.170 "superblock": true, 00:28:43.170 "num_base_bdevs": 4, 00:28:43.170 "num_base_bdevs_discovered": 2, 00:28:43.170 "num_base_bdevs_operational": 4, 00:28:43.170 "base_bdevs_list": [ 00:28:43.170 { 00:28:43.170 "name": "BaseBdev1", 00:28:43.170 "uuid": "a3d0455c-1429-4c5d-826c-92790e0b7bab", 00:28:43.170 "is_configured": true, 00:28:43.170 "data_offset": 2048, 00:28:43.170 "data_size": 63488 00:28:43.170 }, 00:28:43.170 { 00:28:43.170 "name": "BaseBdev2", 00:28:43.170 "uuid": "98c5ccbd-c4d2-4f01-97cd-107facbfbecd", 00:28:43.170 "is_configured": true, 00:28:43.170 "data_offset": 2048, 00:28:43.170 "data_size": 63488 00:28:43.170 }, 00:28:43.170 { 00:28:43.170 "name": "BaseBdev3", 00:28:43.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.170 "is_configured": false, 00:28:43.170 "data_offset": 0, 00:28:43.170 "data_size": 0 00:28:43.170 }, 00:28:43.170 { 00:28:43.170 "name": "BaseBdev4", 00:28:43.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.170 "is_configured": false, 00:28:43.170 "data_offset": 0, 00:28:43.170 "data_size": 0 00:28:43.170 } 00:28:43.170 ] 00:28:43.170 }' 00:28:43.170 11:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:43.170 11:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:43.737 11:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:43.996 [2024-06-10 11:52:15.843310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:43.996 BaseBdev3 00:28:43.996 11:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:43.996 11:52:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:28:43.996 11:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:43.996 11:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:28:43.996 11:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:43.996 11:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:43.996 11:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:44.254 11:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:44.513 [ 00:28:44.513 { 00:28:44.513 "name": "BaseBdev3", 00:28:44.513 "aliases": [ 00:28:44.513 "e9d56f67-98cd-4621-bfc4-d6d370b94eae" 00:28:44.513 ], 00:28:44.513 "product_name": "Malloc disk", 00:28:44.513 "block_size": 512, 00:28:44.513 "num_blocks": 65536, 00:28:44.513 "uuid": "e9d56f67-98cd-4621-bfc4-d6d370b94eae", 00:28:44.513 "assigned_rate_limits": { 00:28:44.513 "rw_ios_per_sec": 0, 00:28:44.513 "rw_mbytes_per_sec": 0, 00:28:44.513 "r_mbytes_per_sec": 0, 00:28:44.513 "w_mbytes_per_sec": 0 00:28:44.513 }, 00:28:44.513 "claimed": true, 00:28:44.513 "claim_type": "exclusive_write", 00:28:44.513 "zoned": false, 00:28:44.513 "supported_io_types": { 00:28:44.513 "read": true, 00:28:44.513 "write": true, 00:28:44.513 "unmap": true, 00:28:44.513 "write_zeroes": true, 00:28:44.513 "flush": true, 00:28:44.513 "reset": true, 00:28:44.513 "compare": false, 00:28:44.513 "compare_and_write": false, 00:28:44.513 "abort": true, 00:28:44.513 "nvme_admin": false, 00:28:44.513 "nvme_io": false 00:28:44.513 }, 00:28:44.513 "memory_domains": [ 00:28:44.513 { 00:28:44.513 "dma_device_id": "system", 00:28:44.513 "dma_device_type": 1 00:28:44.513 }, 00:28:44.513 { 00:28:44.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:44.513 "dma_device_type": 2 00:28:44.513 } 00:28:44.513 ], 00:28:44.513 "driver_specific": {} 00:28:44.513 } 00:28:44.513 ] 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.513 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:44.772 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:44.772 "name": "Existed_Raid", 00:28:44.772 "uuid": "7bddfc51-996f-47f5-8719-2b41c4e2ddc5", 00:28:44.772 "strip_size_kb": 0, 00:28:44.772 "state": "configuring", 00:28:44.772 "raid_level": "raid1", 00:28:44.772 "superblock": true, 00:28:44.772 "num_base_bdevs": 4, 00:28:44.772 "num_base_bdevs_discovered": 3, 00:28:44.772 "num_base_bdevs_operational": 4, 00:28:44.772 "base_bdevs_list": [ 00:28:44.772 { 00:28:44.772 "name": "BaseBdev1", 00:28:44.772 "uuid": "a3d0455c-1429-4c5d-826c-92790e0b7bab", 00:28:44.772 "is_configured": true, 00:28:44.772 "data_offset": 2048, 00:28:44.772 "data_size": 63488 00:28:44.772 }, 00:28:44.772 { 00:28:44.772 "name": "BaseBdev2", 00:28:44.772 "uuid": "98c5ccbd-c4d2-4f01-97cd-107facbfbecd", 00:28:44.772 "is_configured": true, 00:28:44.772 "data_offset": 2048, 00:28:44.772 "data_size": 63488 00:28:44.772 }, 00:28:44.772 { 00:28:44.772 "name": "BaseBdev3", 00:28:44.772 "uuid": "e9d56f67-98cd-4621-bfc4-d6d370b94eae", 00:28:44.772 "is_configured": true, 00:28:44.772 "data_offset": 2048, 00:28:44.772 "data_size": 63488 00:28:44.772 }, 00:28:44.772 { 00:28:44.772 "name": "BaseBdev4", 00:28:44.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:44.772 "is_configured": false, 00:28:44.772 "data_offset": 0, 00:28:44.772 "data_size": 0 00:28:44.772 } 00:28:44.772 ] 00:28:44.772 }' 00:28:44.772 11:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:44.772 11:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:45.338 11:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:45.596 [2024-06-10 11:52:17.643293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:45.596 [2024-06-10 11:52:17.643960] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:28:45.596 [2024-06-10 11:52:17.644145] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:45.596 [2024-06-10 11:52:17.644522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:28:45.596 [2024-06-10 11:52:17.645217] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:28:45.596 BaseBdev4 00:28:45.596 [2024-06-10 11:52:17.645424] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:28:45.596 [2024-06-10 11:52:17.645823] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.854 11:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:28:45.854 11:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 
00:28:45.854 11:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:45.854 11:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:28:45.854 11:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:45.854 11:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:45.854 11:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:46.112 11:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:46.371 [ 00:28:46.371 { 00:28:46.371 "name": "BaseBdev4", 00:28:46.371 "aliases": [ 00:28:46.371 "05cd7420-0189-42da-ac51-6d68336c5839" 00:28:46.371 ], 00:28:46.371 "product_name": "Malloc disk", 00:28:46.371 "block_size": 512, 00:28:46.371 "num_blocks": 65536, 00:28:46.371 "uuid": "05cd7420-0189-42da-ac51-6d68336c5839", 00:28:46.371 "assigned_rate_limits": { 00:28:46.371 "rw_ios_per_sec": 0, 00:28:46.371 "rw_mbytes_per_sec": 0, 00:28:46.371 "r_mbytes_per_sec": 0, 00:28:46.371 "w_mbytes_per_sec": 0 00:28:46.371 }, 00:28:46.371 "claimed": true, 00:28:46.371 "claim_type": "exclusive_write", 00:28:46.371 "zoned": false, 00:28:46.371 "supported_io_types": { 00:28:46.371 "read": true, 00:28:46.371 "write": true, 00:28:46.371 "unmap": true, 00:28:46.371 "write_zeroes": true, 00:28:46.371 "flush": true, 00:28:46.371 "reset": true, 00:28:46.371 "compare": false, 00:28:46.371 "compare_and_write": false, 00:28:46.371 "abort": true, 00:28:46.371 "nvme_admin": false, 00:28:46.371 "nvme_io": false 00:28:46.371 }, 00:28:46.371 "memory_domains": [ 00:28:46.371 { 00:28:46.371 "dma_device_id": "system", 00:28:46.371 "dma_device_type": 1 00:28:46.371 }, 00:28:46.371 { 00:28:46.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:46.371 "dma_device_type": 2 00:28:46.371 } 00:28:46.371 ], 00:28:46.371 "driver_specific": {} 00:28:46.371 } 00:28:46.371 ] 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.371 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:46.629 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:46.629 "name": "Existed_Raid", 00:28:46.629 "uuid": "7bddfc51-996f-47f5-8719-2b41c4e2ddc5", 00:28:46.629 "strip_size_kb": 0, 00:28:46.629 "state": "online", 00:28:46.629 "raid_level": "raid1", 00:28:46.629 "superblock": true, 00:28:46.629 "num_base_bdevs": 4, 00:28:46.629 "num_base_bdevs_discovered": 4, 00:28:46.629 "num_base_bdevs_operational": 4, 00:28:46.629 "base_bdevs_list": [ 00:28:46.629 { 00:28:46.629 "name": "BaseBdev1", 00:28:46.629 "uuid": "a3d0455c-1429-4c5d-826c-92790e0b7bab", 00:28:46.629 "is_configured": true, 00:28:46.629 "data_offset": 2048, 00:28:46.629 "data_size": 63488 00:28:46.629 }, 00:28:46.629 { 00:28:46.629 "name": "BaseBdev2", 00:28:46.629 "uuid": "98c5ccbd-c4d2-4f01-97cd-107facbfbecd", 00:28:46.629 "is_configured": true, 00:28:46.629 "data_offset": 2048, 00:28:46.629 "data_size": 63488 00:28:46.629 }, 00:28:46.629 { 00:28:46.629 "name": "BaseBdev3", 00:28:46.629 "uuid": "e9d56f67-98cd-4621-bfc4-d6d370b94eae", 00:28:46.629 "is_configured": true, 00:28:46.629 "data_offset": 2048, 00:28:46.629 "data_size": 63488 00:28:46.629 }, 00:28:46.629 { 00:28:46.629 "name": "BaseBdev4", 00:28:46.629 "uuid": "05cd7420-0189-42da-ac51-6d68336c5839", 00:28:46.629 "is_configured": true, 00:28:46.629 "data_offset": 2048, 00:28:46.629 "data_size": 63488 00:28:46.629 } 00:28:46.629 ] 00:28:46.629 }' 00:28:46.629 11:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:46.629 11:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:47.195 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:47.195 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:47.195 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:47.195 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:47.195 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:47.195 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:47.195 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:47.195 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:47.454 [2024-06-10 11:52:19.448039] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:47.455 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:47.455 "name": "Existed_Raid", 00:28:47.455 "aliases": [ 00:28:47.455 "7bddfc51-996f-47f5-8719-2b41c4e2ddc5" 00:28:47.455 ], 00:28:47.455 "product_name": "Raid Volume", 00:28:47.455 "block_size": 512, 
00:28:47.455 "num_blocks": 63488, 00:28:47.455 "uuid": "7bddfc51-996f-47f5-8719-2b41c4e2ddc5", 00:28:47.455 "assigned_rate_limits": { 00:28:47.455 "rw_ios_per_sec": 0, 00:28:47.455 "rw_mbytes_per_sec": 0, 00:28:47.455 "r_mbytes_per_sec": 0, 00:28:47.455 "w_mbytes_per_sec": 0 00:28:47.455 }, 00:28:47.455 "claimed": false, 00:28:47.455 "zoned": false, 00:28:47.455 "supported_io_types": { 00:28:47.455 "read": true, 00:28:47.455 "write": true, 00:28:47.455 "unmap": false, 00:28:47.455 "write_zeroes": true, 00:28:47.455 "flush": false, 00:28:47.455 "reset": true, 00:28:47.455 "compare": false, 00:28:47.455 "compare_and_write": false, 00:28:47.455 "abort": false, 00:28:47.455 "nvme_admin": false, 00:28:47.455 "nvme_io": false 00:28:47.455 }, 00:28:47.455 "memory_domains": [ 00:28:47.455 { 00:28:47.455 "dma_device_id": "system", 00:28:47.455 "dma_device_type": 1 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.455 "dma_device_type": 2 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "dma_device_id": "system", 00:28:47.455 "dma_device_type": 1 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.455 "dma_device_type": 2 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "dma_device_id": "system", 00:28:47.455 "dma_device_type": 1 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.455 "dma_device_type": 2 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "dma_device_id": "system", 00:28:47.455 "dma_device_type": 1 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.455 "dma_device_type": 2 00:28:47.455 } 00:28:47.455 ], 00:28:47.455 "driver_specific": { 00:28:47.455 "raid": { 00:28:47.455 "uuid": "7bddfc51-996f-47f5-8719-2b41c4e2ddc5", 00:28:47.455 "strip_size_kb": 0, 00:28:47.455 "state": "online", 00:28:47.455 "raid_level": "raid1", 00:28:47.455 "superblock": true, 00:28:47.455 "num_base_bdevs": 4, 00:28:47.455 "num_base_bdevs_discovered": 4, 00:28:47.455 "num_base_bdevs_operational": 4, 00:28:47.455 "base_bdevs_list": [ 00:28:47.455 { 00:28:47.455 "name": "BaseBdev1", 00:28:47.455 "uuid": "a3d0455c-1429-4c5d-826c-92790e0b7bab", 00:28:47.455 "is_configured": true, 00:28:47.455 "data_offset": 2048, 00:28:47.455 "data_size": 63488 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "name": "BaseBdev2", 00:28:47.455 "uuid": "98c5ccbd-c4d2-4f01-97cd-107facbfbecd", 00:28:47.455 "is_configured": true, 00:28:47.455 "data_offset": 2048, 00:28:47.455 "data_size": 63488 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "name": "BaseBdev3", 00:28:47.455 "uuid": "e9d56f67-98cd-4621-bfc4-d6d370b94eae", 00:28:47.455 "is_configured": true, 00:28:47.455 "data_offset": 2048, 00:28:47.455 "data_size": 63488 00:28:47.455 }, 00:28:47.455 { 00:28:47.455 "name": "BaseBdev4", 00:28:47.455 "uuid": "05cd7420-0189-42da-ac51-6d68336c5839", 00:28:47.455 "is_configured": true, 00:28:47.455 "data_offset": 2048, 00:28:47.455 "data_size": 63488 00:28:47.455 } 00:28:47.455 ] 00:28:47.455 } 00:28:47.455 } 00:28:47.455 }' 00:28:47.455 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:47.714 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:47.714 BaseBdev2 00:28:47.714 BaseBdev3 00:28:47.714 BaseBdev4' 00:28:47.714 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:28:47.714 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:47.714 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:47.973 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:47.973 "name": "BaseBdev1", 00:28:47.973 "aliases": [ 00:28:47.973 "a3d0455c-1429-4c5d-826c-92790e0b7bab" 00:28:47.973 ], 00:28:47.973 "product_name": "Malloc disk", 00:28:47.973 "block_size": 512, 00:28:47.973 "num_blocks": 65536, 00:28:47.973 "uuid": "a3d0455c-1429-4c5d-826c-92790e0b7bab", 00:28:47.973 "assigned_rate_limits": { 00:28:47.973 "rw_ios_per_sec": 0, 00:28:47.973 "rw_mbytes_per_sec": 0, 00:28:47.973 "r_mbytes_per_sec": 0, 00:28:47.973 "w_mbytes_per_sec": 0 00:28:47.973 }, 00:28:47.973 "claimed": true, 00:28:47.973 "claim_type": "exclusive_write", 00:28:47.973 "zoned": false, 00:28:47.973 "supported_io_types": { 00:28:47.973 "read": true, 00:28:47.973 "write": true, 00:28:47.973 "unmap": true, 00:28:47.973 "write_zeroes": true, 00:28:47.973 "flush": true, 00:28:47.973 "reset": true, 00:28:47.973 "compare": false, 00:28:47.973 "compare_and_write": false, 00:28:47.973 "abort": true, 00:28:47.973 "nvme_admin": false, 00:28:47.973 "nvme_io": false 00:28:47.973 }, 00:28:47.973 "memory_domains": [ 00:28:47.973 { 00:28:47.973 "dma_device_id": "system", 00:28:47.973 "dma_device_type": 1 00:28:47.973 }, 00:28:47.973 { 00:28:47.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.973 "dma_device_type": 2 00:28:47.973 } 00:28:47.973 ], 00:28:47.973 "driver_specific": {} 00:28:47.973 }' 00:28:47.973 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:47.973 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:47.973 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:47.973 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:47.973 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:47.973 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:47.973 11:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:47.973 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.232 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:48.232 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.232 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.232 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:48.232 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:48.232 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:48.232 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:48.489 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:48.489 "name": "BaseBdev2", 
00:28:48.489 "aliases": [ 00:28:48.489 "98c5ccbd-c4d2-4f01-97cd-107facbfbecd" 00:28:48.489 ], 00:28:48.489 "product_name": "Malloc disk", 00:28:48.489 "block_size": 512, 00:28:48.489 "num_blocks": 65536, 00:28:48.489 "uuid": "98c5ccbd-c4d2-4f01-97cd-107facbfbecd", 00:28:48.490 "assigned_rate_limits": { 00:28:48.490 "rw_ios_per_sec": 0, 00:28:48.490 "rw_mbytes_per_sec": 0, 00:28:48.490 "r_mbytes_per_sec": 0, 00:28:48.490 "w_mbytes_per_sec": 0 00:28:48.490 }, 00:28:48.490 "claimed": true, 00:28:48.490 "claim_type": "exclusive_write", 00:28:48.490 "zoned": false, 00:28:48.490 "supported_io_types": { 00:28:48.490 "read": true, 00:28:48.490 "write": true, 00:28:48.490 "unmap": true, 00:28:48.490 "write_zeroes": true, 00:28:48.490 "flush": true, 00:28:48.490 "reset": true, 00:28:48.490 "compare": false, 00:28:48.490 "compare_and_write": false, 00:28:48.490 "abort": true, 00:28:48.490 "nvme_admin": false, 00:28:48.490 "nvme_io": false 00:28:48.490 }, 00:28:48.490 "memory_domains": [ 00:28:48.490 { 00:28:48.490 "dma_device_id": "system", 00:28:48.490 "dma_device_type": 1 00:28:48.490 }, 00:28:48.490 { 00:28:48.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:48.490 "dma_device_type": 2 00:28:48.490 } 00:28:48.490 ], 00:28:48.490 "driver_specific": {} 00:28:48.490 }' 00:28:48.490 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:48.490 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:48.490 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:48.490 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:48.747 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:48.747 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:48.748 11:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:49.091 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:49.091 "name": "BaseBdev3", 00:28:49.091 "aliases": [ 00:28:49.091 "e9d56f67-98cd-4621-bfc4-d6d370b94eae" 00:28:49.091 ], 00:28:49.091 "product_name": "Malloc disk", 00:28:49.091 "block_size": 512, 00:28:49.091 "num_blocks": 65536, 00:28:49.091 "uuid": "e9d56f67-98cd-4621-bfc4-d6d370b94eae", 00:28:49.091 "assigned_rate_limits": { 00:28:49.091 "rw_ios_per_sec": 0, 00:28:49.091 "rw_mbytes_per_sec": 0, 00:28:49.091 "r_mbytes_per_sec": 0, 00:28:49.091 "w_mbytes_per_sec": 0 
00:28:49.091 }, 00:28:49.091 "claimed": true, 00:28:49.091 "claim_type": "exclusive_write", 00:28:49.091 "zoned": false, 00:28:49.091 "supported_io_types": { 00:28:49.091 "read": true, 00:28:49.091 "write": true, 00:28:49.091 "unmap": true, 00:28:49.091 "write_zeroes": true, 00:28:49.091 "flush": true, 00:28:49.091 "reset": true, 00:28:49.091 "compare": false, 00:28:49.091 "compare_and_write": false, 00:28:49.091 "abort": true, 00:28:49.091 "nvme_admin": false, 00:28:49.091 "nvme_io": false 00:28:49.091 }, 00:28:49.091 "memory_domains": [ 00:28:49.091 { 00:28:49.091 "dma_device_id": "system", 00:28:49.091 "dma_device_type": 1 00:28:49.091 }, 00:28:49.091 { 00:28:49.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:49.091 "dma_device_type": 2 00:28:49.091 } 00:28:49.091 ], 00:28:49.091 "driver_specific": {} 00:28:49.091 }' 00:28:49.091 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:49.350 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:49.609 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:49.609 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:49.609 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:49.609 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:49.609 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:49.867 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:49.867 "name": "BaseBdev4", 00:28:49.867 "aliases": [ 00:28:49.867 "05cd7420-0189-42da-ac51-6d68336c5839" 00:28:49.867 ], 00:28:49.867 "product_name": "Malloc disk", 00:28:49.867 "block_size": 512, 00:28:49.867 "num_blocks": 65536, 00:28:49.867 "uuid": "05cd7420-0189-42da-ac51-6d68336c5839", 00:28:49.867 "assigned_rate_limits": { 00:28:49.867 "rw_ios_per_sec": 0, 00:28:49.867 "rw_mbytes_per_sec": 0, 00:28:49.867 "r_mbytes_per_sec": 0, 00:28:49.867 "w_mbytes_per_sec": 0 00:28:49.867 }, 00:28:49.867 "claimed": true, 00:28:49.867 "claim_type": "exclusive_write", 00:28:49.867 "zoned": false, 00:28:49.867 "supported_io_types": { 00:28:49.867 "read": true, 00:28:49.867 "write": true, 00:28:49.867 "unmap": true, 00:28:49.867 "write_zeroes": true, 00:28:49.867 "flush": true, 00:28:49.867 "reset": true, 00:28:49.867 "compare": false, 00:28:49.867 "compare_and_write": false, 00:28:49.867 "abort": true, 00:28:49.867 
"nvme_admin": false, 00:28:49.867 "nvme_io": false 00:28:49.867 }, 00:28:49.867 "memory_domains": [ 00:28:49.867 { 00:28:49.867 "dma_device_id": "system", 00:28:49.867 "dma_device_type": 1 00:28:49.867 }, 00:28:49.867 { 00:28:49.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:49.867 "dma_device_type": 2 00:28:49.867 } 00:28:49.867 ], 00:28:49.867 "driver_specific": {} 00:28:49.867 }' 00:28:49.867 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:49.867 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:50.126 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:50.126 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:50.126 11:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:50.126 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:50.126 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:50.126 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:50.126 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:50.126 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:50.126 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:50.384 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:50.384 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:50.384 [2024-06-10 11:52:22.420619] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.641 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:28:50.642 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.642 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:50.642 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.899 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:50.899 "name": "Existed_Raid", 00:28:50.899 "uuid": "7bddfc51-996f-47f5-8719-2b41c4e2ddc5", 00:28:50.899 "strip_size_kb": 0, 00:28:50.899 "state": "online", 00:28:50.899 "raid_level": "raid1", 00:28:50.899 "superblock": true, 00:28:50.899 "num_base_bdevs": 4, 00:28:50.899 "num_base_bdevs_discovered": 3, 00:28:50.899 "num_base_bdevs_operational": 3, 00:28:50.899 "base_bdevs_list": [ 00:28:50.899 { 00:28:50.899 "name": null, 00:28:50.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.899 "is_configured": false, 00:28:50.899 "data_offset": 2048, 00:28:50.899 "data_size": 63488 00:28:50.899 }, 00:28:50.899 { 00:28:50.899 "name": "BaseBdev2", 00:28:50.899 "uuid": "98c5ccbd-c4d2-4f01-97cd-107facbfbecd", 00:28:50.899 "is_configured": true, 00:28:50.899 "data_offset": 2048, 00:28:50.899 "data_size": 63488 00:28:50.899 }, 00:28:50.899 { 00:28:50.899 "name": "BaseBdev3", 00:28:50.899 "uuid": "e9d56f67-98cd-4621-bfc4-d6d370b94eae", 00:28:50.900 "is_configured": true, 00:28:50.900 "data_offset": 2048, 00:28:50.900 "data_size": 63488 00:28:50.900 }, 00:28:50.900 { 00:28:50.900 "name": "BaseBdev4", 00:28:50.900 "uuid": "05cd7420-0189-42da-ac51-6d68336c5839", 00:28:50.900 "is_configured": true, 00:28:50.900 "data_offset": 2048, 00:28:50.900 "data_size": 63488 00:28:50.900 } 00:28:50.900 ] 00:28:50.900 }' 00:28:50.900 11:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:50.900 11:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.466 11:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:51.466 11:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:51.466 11:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.466 11:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:51.727 11:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:51.727 11:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:51.727 11:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:51.985 [2024-06-10 11:52:24.017470] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:52.244 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:52.244 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:52.244 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:52.244 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.585 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:52.585 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:52.585 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:52.843 [2024-06-10 11:52:24.712582] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:52.843 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:52.844 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:52.844 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.844 11:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:53.102 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:53.102 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:53.103 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:53.362 [2024-06-10 11:52:25.342683] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:53.362 [2024-06-10 11:52:25.343044] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:53.620 [2024-06-10 11:52:25.460798] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:53.620 [2024-06-10 11:52:25.461060] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:53.620 [2024-06-10 11:52:25.461163] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:28:53.620 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:53.620 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:53.620 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:53.620 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.878 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:53.878 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:53.878 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:28:53.878 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:53.878 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:53.878 11:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:54.135 BaseBdev2 00:28:54.135 11:52:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:54.135 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:28:54.135 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:54.135 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:28:54.135 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:54.135 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:54.135 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:54.701 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:54.701 [ 00:28:54.701 { 00:28:54.701 "name": "BaseBdev2", 00:28:54.701 "aliases": [ 00:28:54.701 "054075b6-a65d-431f-86ce-776d0a3c2656" 00:28:54.701 ], 00:28:54.701 "product_name": "Malloc disk", 00:28:54.701 "block_size": 512, 00:28:54.701 "num_blocks": 65536, 00:28:54.701 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:28:54.701 "assigned_rate_limits": { 00:28:54.701 "rw_ios_per_sec": 0, 00:28:54.701 "rw_mbytes_per_sec": 0, 00:28:54.701 "r_mbytes_per_sec": 0, 00:28:54.701 "w_mbytes_per_sec": 0 00:28:54.701 }, 00:28:54.701 "claimed": false, 00:28:54.701 "zoned": false, 00:28:54.701 "supported_io_types": { 00:28:54.701 "read": true, 00:28:54.701 "write": true, 00:28:54.701 "unmap": true, 00:28:54.701 "write_zeroes": true, 00:28:54.701 "flush": true, 00:28:54.701 "reset": true, 00:28:54.701 "compare": false, 00:28:54.701 "compare_and_write": false, 00:28:54.701 "abort": true, 00:28:54.701 "nvme_admin": false, 00:28:54.701 "nvme_io": false 00:28:54.701 }, 00:28:54.701 "memory_domains": [ 00:28:54.701 { 00:28:54.701 "dma_device_id": "system", 00:28:54.701 "dma_device_type": 1 00:28:54.701 }, 00:28:54.701 { 00:28:54.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:54.701 "dma_device_type": 2 00:28:54.701 } 00:28:54.701 ], 00:28:54.701 "driver_specific": {} 00:28:54.701 } 00:28:54.701 ] 00:28:54.701 11:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:28:54.701 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:54.701 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:54.701 11:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:55.267 BaseBdev3 00:28:55.267 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:55.267 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:28:55.267 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:55.267 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:28:55.267 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:55.267 11:52:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:55.267 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:55.525 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:55.805 [ 00:28:55.805 { 00:28:55.805 "name": "BaseBdev3", 00:28:55.805 "aliases": [ 00:28:55.805 "18805b5c-6dc2-4864-89cd-c536dabca630" 00:28:55.805 ], 00:28:55.805 "product_name": "Malloc disk", 00:28:55.805 "block_size": 512, 00:28:55.805 "num_blocks": 65536, 00:28:55.805 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:28:55.805 "assigned_rate_limits": { 00:28:55.805 "rw_ios_per_sec": 0, 00:28:55.805 "rw_mbytes_per_sec": 0, 00:28:55.805 "r_mbytes_per_sec": 0, 00:28:55.805 "w_mbytes_per_sec": 0 00:28:55.805 }, 00:28:55.805 "claimed": false, 00:28:55.805 "zoned": false, 00:28:55.805 "supported_io_types": { 00:28:55.805 "read": true, 00:28:55.805 "write": true, 00:28:55.805 "unmap": true, 00:28:55.805 "write_zeroes": true, 00:28:55.805 "flush": true, 00:28:55.805 "reset": true, 00:28:55.805 "compare": false, 00:28:55.805 "compare_and_write": false, 00:28:55.805 "abort": true, 00:28:55.805 "nvme_admin": false, 00:28:55.805 "nvme_io": false 00:28:55.805 }, 00:28:55.805 "memory_domains": [ 00:28:55.805 { 00:28:55.805 "dma_device_id": "system", 00:28:55.805 "dma_device_type": 1 00:28:55.805 }, 00:28:55.805 { 00:28:55.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:55.805 "dma_device_type": 2 00:28:55.805 } 00:28:55.805 ], 00:28:55.805 "driver_specific": {} 00:28:55.805 } 00:28:55.805 ] 00:28:55.805 11:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:28:55.805 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:55.805 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:55.805 11:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:56.064 BaseBdev4 00:28:56.064 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:28:56.064 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:28:56.064 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:56.064 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:28:56.064 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:56.064 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:56.064 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:56.322 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:56.580 [ 00:28:56.581 { 00:28:56.581 "name": "BaseBdev4", 00:28:56.581 "aliases": [ 00:28:56.581 "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb" 00:28:56.581 ], 
00:28:56.581 "product_name": "Malloc disk", 00:28:56.581 "block_size": 512, 00:28:56.581 "num_blocks": 65536, 00:28:56.581 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:28:56.581 "assigned_rate_limits": { 00:28:56.581 "rw_ios_per_sec": 0, 00:28:56.581 "rw_mbytes_per_sec": 0, 00:28:56.581 "r_mbytes_per_sec": 0, 00:28:56.581 "w_mbytes_per_sec": 0 00:28:56.581 }, 00:28:56.581 "claimed": false, 00:28:56.581 "zoned": false, 00:28:56.581 "supported_io_types": { 00:28:56.581 "read": true, 00:28:56.581 "write": true, 00:28:56.581 "unmap": true, 00:28:56.581 "write_zeroes": true, 00:28:56.581 "flush": true, 00:28:56.581 "reset": true, 00:28:56.581 "compare": false, 00:28:56.581 "compare_and_write": false, 00:28:56.581 "abort": true, 00:28:56.581 "nvme_admin": false, 00:28:56.581 "nvme_io": false 00:28:56.581 }, 00:28:56.581 "memory_domains": [ 00:28:56.581 { 00:28:56.581 "dma_device_id": "system", 00:28:56.581 "dma_device_type": 1 00:28:56.581 }, 00:28:56.581 { 00:28:56.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.581 "dma_device_type": 2 00:28:56.581 } 00:28:56.581 ], 00:28:56.581 "driver_specific": {} 00:28:56.581 } 00:28:56.581 ] 00:28:56.581 11:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:28:56.581 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:56.581 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:56.581 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:56.840 [2024-06-10 11:52:28.849517] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:56.840 [2024-06-10 11:52:28.849663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:56.840 [2024-06-10 11:52:28.849728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:56.840 [2024-06-10 11:52:28.851992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:56.840 [2024-06-10 11:52:28.852518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.840 11:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:57.406 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.406 "name": "Existed_Raid", 00:28:57.406 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:28:57.406 "strip_size_kb": 0, 00:28:57.406 "state": "configuring", 00:28:57.406 "raid_level": "raid1", 00:28:57.406 "superblock": true, 00:28:57.406 "num_base_bdevs": 4, 00:28:57.406 "num_base_bdevs_discovered": 3, 00:28:57.406 "num_base_bdevs_operational": 4, 00:28:57.406 "base_bdevs_list": [ 00:28:57.406 { 00:28:57.406 "name": "BaseBdev1", 00:28:57.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.406 "is_configured": false, 00:28:57.406 "data_offset": 0, 00:28:57.406 "data_size": 0 00:28:57.406 }, 00:28:57.406 { 00:28:57.406 "name": "BaseBdev2", 00:28:57.406 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:28:57.406 "is_configured": true, 00:28:57.406 "data_offset": 2048, 00:28:57.406 "data_size": 63488 00:28:57.406 }, 00:28:57.406 { 00:28:57.406 "name": "BaseBdev3", 00:28:57.406 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:28:57.406 "is_configured": true, 00:28:57.406 "data_offset": 2048, 00:28:57.406 "data_size": 63488 00:28:57.406 }, 00:28:57.406 { 00:28:57.406 "name": "BaseBdev4", 00:28:57.406 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:28:57.406 "is_configured": true, 00:28:57.406 "data_offset": 2048, 00:28:57.406 "data_size": 63488 00:28:57.406 } 00:28:57.406 ] 00:28:57.406 }' 00:28:57.406 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.406 11:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.969 11:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:58.244 [2024-06-10 11:52:30.145833] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:28:58.244 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.502 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:58.502 "name": "Existed_Raid", 00:28:58.502 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:28:58.502 "strip_size_kb": 0, 00:28:58.502 "state": "configuring", 00:28:58.502 "raid_level": "raid1", 00:28:58.502 "superblock": true, 00:28:58.502 "num_base_bdevs": 4, 00:28:58.502 "num_base_bdevs_discovered": 2, 00:28:58.502 "num_base_bdevs_operational": 4, 00:28:58.502 "base_bdevs_list": [ 00:28:58.502 { 00:28:58.502 "name": "BaseBdev1", 00:28:58.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.502 "is_configured": false, 00:28:58.502 "data_offset": 0, 00:28:58.502 "data_size": 0 00:28:58.502 }, 00:28:58.502 { 00:28:58.502 "name": null, 00:28:58.502 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:28:58.502 "is_configured": false, 00:28:58.502 "data_offset": 2048, 00:28:58.502 "data_size": 63488 00:28:58.502 }, 00:28:58.502 { 00:28:58.502 "name": "BaseBdev3", 00:28:58.502 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:28:58.502 "is_configured": true, 00:28:58.502 "data_offset": 2048, 00:28:58.502 "data_size": 63488 00:28:58.502 }, 00:28:58.502 { 00:28:58.502 "name": "BaseBdev4", 00:28:58.502 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:28:58.502 "is_configured": true, 00:28:58.502 "data_offset": 2048, 00:28:58.502 "data_size": 63488 00:28:58.502 } 00:28:58.502 ] 00:28:58.502 }' 00:28:58.502 11:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:58.502 11:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.438 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.438 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:59.696 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:59.696 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:59.954 [2024-06-10 11:52:31.762470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:59.954 BaseBdev1 00:28:59.954 11:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:59.954 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:28:59.954 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:59.954 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:28:59.954 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:59.954 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:59.954 11:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:00.210 11:52:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:00.486 [ 00:29:00.486 { 00:29:00.486 "name": "BaseBdev1", 00:29:00.486 "aliases": [ 00:29:00.486 "01937f73-a822-4600-9915-91745d82f0ee" 00:29:00.486 ], 00:29:00.486 "product_name": "Malloc disk", 00:29:00.486 "block_size": 512, 00:29:00.486 "num_blocks": 65536, 00:29:00.486 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:00.486 "assigned_rate_limits": { 00:29:00.486 "rw_ios_per_sec": 0, 00:29:00.486 "rw_mbytes_per_sec": 0, 00:29:00.486 "r_mbytes_per_sec": 0, 00:29:00.486 "w_mbytes_per_sec": 0 00:29:00.486 }, 00:29:00.486 "claimed": true, 00:29:00.486 "claim_type": "exclusive_write", 00:29:00.486 "zoned": false, 00:29:00.486 "supported_io_types": { 00:29:00.486 "read": true, 00:29:00.486 "write": true, 00:29:00.486 "unmap": true, 00:29:00.486 "write_zeroes": true, 00:29:00.486 "flush": true, 00:29:00.486 "reset": true, 00:29:00.486 "compare": false, 00:29:00.487 "compare_and_write": false, 00:29:00.487 "abort": true, 00:29:00.487 "nvme_admin": false, 00:29:00.487 "nvme_io": false 00:29:00.487 }, 00:29:00.487 "memory_domains": [ 00:29:00.487 { 00:29:00.487 "dma_device_id": "system", 00:29:00.487 "dma_device_type": 1 00:29:00.487 }, 00:29:00.487 { 00:29:00.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:00.487 "dma_device_type": 2 00:29:00.487 } 00:29:00.487 ], 00:29:00.487 "driver_specific": {} 00:29:00.487 } 00:29:00.487 ] 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:00.487 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.745 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:00.745 "name": "Existed_Raid", 00:29:00.745 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:29:00.745 "strip_size_kb": 0, 00:29:00.745 "state": "configuring", 00:29:00.745 "raid_level": "raid1", 00:29:00.745 "superblock": true, 00:29:00.745 "num_base_bdevs": 4, 00:29:00.745 
"num_base_bdevs_discovered": 3, 00:29:00.745 "num_base_bdevs_operational": 4, 00:29:00.745 "base_bdevs_list": [ 00:29:00.745 { 00:29:00.745 "name": "BaseBdev1", 00:29:00.745 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:00.745 "is_configured": true, 00:29:00.745 "data_offset": 2048, 00:29:00.745 "data_size": 63488 00:29:00.745 }, 00:29:00.745 { 00:29:00.745 "name": null, 00:29:00.745 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:29:00.745 "is_configured": false, 00:29:00.745 "data_offset": 2048, 00:29:00.745 "data_size": 63488 00:29:00.745 }, 00:29:00.745 { 00:29:00.745 "name": "BaseBdev3", 00:29:00.745 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:29:00.745 "is_configured": true, 00:29:00.745 "data_offset": 2048, 00:29:00.745 "data_size": 63488 00:29:00.745 }, 00:29:00.745 { 00:29:00.745 "name": "BaseBdev4", 00:29:00.745 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:29:00.745 "is_configured": true, 00:29:00.745 "data_offset": 2048, 00:29:00.745 "data_size": 63488 00:29:00.745 } 00:29:00.745 ] 00:29:00.745 }' 00:29:00.745 11:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:00.745 11:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.310 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.310 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:01.568 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:29:01.568 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:29:01.826 [2024-06-10 11:52:33.739094] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:01.826 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:01.826 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:01.826 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:01.826 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:01.826 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:01.827 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:01.827 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:01.827 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:01.827 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:01.827 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:01.827 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.827 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:02.084 11:52:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.084 "name": "Existed_Raid", 00:29:02.084 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:29:02.084 "strip_size_kb": 0, 00:29:02.084 "state": "configuring", 00:29:02.084 "raid_level": "raid1", 00:29:02.084 "superblock": true, 00:29:02.084 "num_base_bdevs": 4, 00:29:02.084 "num_base_bdevs_discovered": 2, 00:29:02.084 "num_base_bdevs_operational": 4, 00:29:02.084 "base_bdevs_list": [ 00:29:02.084 { 00:29:02.084 "name": "BaseBdev1", 00:29:02.084 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:02.084 "is_configured": true, 00:29:02.084 "data_offset": 2048, 00:29:02.084 "data_size": 63488 00:29:02.084 }, 00:29:02.084 { 00:29:02.084 "name": null, 00:29:02.084 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:29:02.084 "is_configured": false, 00:29:02.084 "data_offset": 2048, 00:29:02.084 "data_size": 63488 00:29:02.084 }, 00:29:02.084 { 00:29:02.084 "name": null, 00:29:02.084 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:29:02.084 "is_configured": false, 00:29:02.084 "data_offset": 2048, 00:29:02.084 "data_size": 63488 00:29:02.084 }, 00:29:02.084 { 00:29:02.084 "name": "BaseBdev4", 00:29:02.084 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:29:02.084 "is_configured": true, 00:29:02.084 "data_offset": 2048, 00:29:02.084 "data_size": 63488 00:29:02.084 } 00:29:02.084 ] 00:29:02.084 }' 00:29:02.084 11:52:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.084 11:52:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.651 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.651 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:03.216 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:29:03.216 11:52:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:03.474 [2024-06-10 11:52:35.279063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.474 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:03.733 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.733 "name": "Existed_Raid", 00:29:03.733 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:29:03.733 "strip_size_kb": 0, 00:29:03.733 "state": "configuring", 00:29:03.733 "raid_level": "raid1", 00:29:03.733 "superblock": true, 00:29:03.733 "num_base_bdevs": 4, 00:29:03.733 "num_base_bdevs_discovered": 3, 00:29:03.733 "num_base_bdevs_operational": 4, 00:29:03.733 "base_bdevs_list": [ 00:29:03.733 { 00:29:03.733 "name": "BaseBdev1", 00:29:03.733 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:03.733 "is_configured": true, 00:29:03.733 "data_offset": 2048, 00:29:03.733 "data_size": 63488 00:29:03.733 }, 00:29:03.733 { 00:29:03.733 "name": null, 00:29:03.733 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:29:03.733 "is_configured": false, 00:29:03.733 "data_offset": 2048, 00:29:03.733 "data_size": 63488 00:29:03.733 }, 00:29:03.733 { 00:29:03.733 "name": "BaseBdev3", 00:29:03.733 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:29:03.733 "is_configured": true, 00:29:03.733 "data_offset": 2048, 00:29:03.733 "data_size": 63488 00:29:03.733 }, 00:29:03.733 { 00:29:03.733 "name": "BaseBdev4", 00:29:03.733 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:29:03.733 "is_configured": true, 00:29:03.733 "data_offset": 2048, 00:29:03.733 "data_size": 63488 00:29:03.733 } 00:29:03.733 ] 00:29:03.733 }' 00:29:03.733 11:52:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.733 11:52:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.300 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.300 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:04.865 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:29:04.865 11:52:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:04.865 [2024-06-10 11:52:36.891777] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.123 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:05.382 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:05.382 "name": "Existed_Raid", 00:29:05.382 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:29:05.382 "strip_size_kb": 0, 00:29:05.382 "state": "configuring", 00:29:05.382 "raid_level": "raid1", 00:29:05.382 "superblock": true, 00:29:05.382 "num_base_bdevs": 4, 00:29:05.382 "num_base_bdevs_discovered": 2, 00:29:05.382 "num_base_bdevs_operational": 4, 00:29:05.382 "base_bdevs_list": [ 00:29:05.382 { 00:29:05.382 "name": null, 00:29:05.382 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:05.382 "is_configured": false, 00:29:05.382 "data_offset": 2048, 00:29:05.382 "data_size": 63488 00:29:05.382 }, 00:29:05.382 { 00:29:05.382 "name": null, 00:29:05.382 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:29:05.382 "is_configured": false, 00:29:05.382 "data_offset": 2048, 00:29:05.382 "data_size": 63488 00:29:05.382 }, 00:29:05.382 { 00:29:05.382 "name": "BaseBdev3", 00:29:05.382 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:29:05.382 "is_configured": true, 00:29:05.382 "data_offset": 2048, 00:29:05.382 "data_size": 63488 00:29:05.382 }, 00:29:05.382 { 00:29:05.382 "name": "BaseBdev4", 00:29:05.382 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:29:05.382 "is_configured": true, 00:29:05.382 "data_offset": 2048, 00:29:05.382 "data_size": 63488 00:29:05.382 } 00:29:05.382 ] 00:29:05.382 }' 00:29:05.382 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:05.382 11:52:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.949 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.949 11:52:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:06.207 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:29:06.207 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:29:06.207 [2024-06-10 11:52:38.260651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:06.466 11:52:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.466 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:06.724 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.724 "name": "Existed_Raid", 00:29:06.724 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:29:06.724 "strip_size_kb": 0, 00:29:06.724 "state": "configuring", 00:29:06.724 "raid_level": "raid1", 00:29:06.724 "superblock": true, 00:29:06.724 "num_base_bdevs": 4, 00:29:06.724 "num_base_bdevs_discovered": 3, 00:29:06.724 "num_base_bdevs_operational": 4, 00:29:06.724 "base_bdevs_list": [ 00:29:06.724 { 00:29:06.724 "name": null, 00:29:06.724 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:06.724 "is_configured": false, 00:29:06.724 "data_offset": 2048, 00:29:06.724 "data_size": 63488 00:29:06.724 }, 00:29:06.724 { 00:29:06.724 "name": "BaseBdev2", 00:29:06.724 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:29:06.724 "is_configured": true, 00:29:06.724 "data_offset": 2048, 00:29:06.724 "data_size": 63488 00:29:06.724 }, 00:29:06.724 { 00:29:06.724 "name": "BaseBdev3", 00:29:06.724 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:29:06.724 "is_configured": true, 00:29:06.724 "data_offset": 2048, 00:29:06.724 "data_size": 63488 00:29:06.724 }, 00:29:06.724 { 00:29:06.724 "name": "BaseBdev4", 00:29:06.724 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:29:06.724 "is_configured": true, 00:29:06.724 "data_offset": 2048, 00:29:06.724 "data_size": 63488 00:29:06.724 } 00:29:06.724 ] 00:29:06.724 }' 00:29:06.724 11:52:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.724 11:52:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.290 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.290 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:07.549 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:29:07.549 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.549 11:52:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:07.817 11:52:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 01937f73-a822-4600-9915-91745d82f0ee 00:29:08.385 [2024-06-10 11:52:40.185114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:08.385 [2024-06-10 11:52:40.185611] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:29:08.385 [2024-06-10 11:52:40.185751] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:08.385 [2024-06-10 11:52:40.185915] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:08.385 [2024-06-10 11:52:40.186472] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:29:08.385 [2024-06-10 11:52:40.186598] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:29:08.385 NewBaseBdev 00:29:08.385 [2024-06-10 11:52:40.186935] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:08.385 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:29:08.385 11:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:29:08.385 11:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:29:08.385 11:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:29:08.385 11:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:29:08.385 11:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:29:08.385 11:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:08.644 11:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:08.902 [ 00:29:08.902 { 00:29:08.902 "name": "NewBaseBdev", 00:29:08.902 "aliases": [ 00:29:08.902 "01937f73-a822-4600-9915-91745d82f0ee" 00:29:08.902 ], 00:29:08.902 "product_name": "Malloc disk", 00:29:08.902 "block_size": 512, 00:29:08.902 "num_blocks": 65536, 00:29:08.902 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:08.902 "assigned_rate_limits": { 00:29:08.902 "rw_ios_per_sec": 0, 00:29:08.902 "rw_mbytes_per_sec": 0, 00:29:08.902 "r_mbytes_per_sec": 0, 00:29:08.902 "w_mbytes_per_sec": 0 00:29:08.902 }, 00:29:08.902 "claimed": true, 00:29:08.902 "claim_type": "exclusive_write", 00:29:08.902 "zoned": false, 00:29:08.902 "supported_io_types": { 00:29:08.902 "read": true, 00:29:08.902 "write": true, 00:29:08.902 "unmap": true, 00:29:08.902 "write_zeroes": true, 00:29:08.902 "flush": true, 00:29:08.902 "reset": true, 00:29:08.902 "compare": false, 00:29:08.902 "compare_and_write": false, 00:29:08.902 "abort": true, 00:29:08.902 "nvme_admin": false, 00:29:08.902 "nvme_io": false 00:29:08.902 }, 00:29:08.902 "memory_domains": [ 00:29:08.902 { 00:29:08.902 "dma_device_id": "system", 00:29:08.902 "dma_device_type": 1 00:29:08.902 }, 00:29:08.902 { 00:29:08.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.902 "dma_device_type": 2 00:29:08.902 } 00:29:08.902 ], 00:29:08.902 "driver_specific": {} 00:29:08.902 } 
00:29:08.902 ] 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.902 11:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:09.161 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:09.161 "name": "Existed_Raid", 00:29:09.161 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:29:09.161 "strip_size_kb": 0, 00:29:09.161 "state": "online", 00:29:09.161 "raid_level": "raid1", 00:29:09.161 "superblock": true, 00:29:09.161 "num_base_bdevs": 4, 00:29:09.161 "num_base_bdevs_discovered": 4, 00:29:09.161 "num_base_bdevs_operational": 4, 00:29:09.161 "base_bdevs_list": [ 00:29:09.161 { 00:29:09.161 "name": "NewBaseBdev", 00:29:09.161 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:09.161 "is_configured": true, 00:29:09.161 "data_offset": 2048, 00:29:09.161 "data_size": 63488 00:29:09.161 }, 00:29:09.161 { 00:29:09.161 "name": "BaseBdev2", 00:29:09.161 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:29:09.161 "is_configured": true, 00:29:09.161 "data_offset": 2048, 00:29:09.161 "data_size": 63488 00:29:09.161 }, 00:29:09.161 { 00:29:09.161 "name": "BaseBdev3", 00:29:09.161 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:29:09.161 "is_configured": true, 00:29:09.161 "data_offset": 2048, 00:29:09.161 "data_size": 63488 00:29:09.161 }, 00:29:09.161 { 00:29:09.161 "name": "BaseBdev4", 00:29:09.161 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:29:09.161 "is_configured": true, 00:29:09.161 "data_offset": 2048, 00:29:09.161 "data_size": 63488 00:29:09.161 } 00:29:09.161 ] 00:29:09.161 }' 00:29:09.161 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:09.161 11:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.730 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:29:09.730 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:29:09.730 11:52:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:09.730 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:09.730 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:09.730 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:29:09.730 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:29:09.730 11:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:09.988 [2024-06-10 11:52:42.013939] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:09.988 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:09.988 "name": "Existed_Raid", 00:29:09.988 "aliases": [ 00:29:09.988 "11865093-c965-4158-ac65-cb8b5c56062d" 00:29:09.988 ], 00:29:09.988 "product_name": "Raid Volume", 00:29:09.988 "block_size": 512, 00:29:09.988 "num_blocks": 63488, 00:29:09.988 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:29:09.988 "assigned_rate_limits": { 00:29:09.988 "rw_ios_per_sec": 0, 00:29:09.988 "rw_mbytes_per_sec": 0, 00:29:09.988 "r_mbytes_per_sec": 0, 00:29:09.988 "w_mbytes_per_sec": 0 00:29:09.988 }, 00:29:09.988 "claimed": false, 00:29:09.988 "zoned": false, 00:29:09.988 "supported_io_types": { 00:29:09.988 "read": true, 00:29:09.988 "write": true, 00:29:09.988 "unmap": false, 00:29:09.988 "write_zeroes": true, 00:29:09.988 "flush": false, 00:29:09.988 "reset": true, 00:29:09.988 "compare": false, 00:29:09.988 "compare_and_write": false, 00:29:09.988 "abort": false, 00:29:09.988 "nvme_admin": false, 00:29:09.988 "nvme_io": false 00:29:09.988 }, 00:29:09.988 "memory_domains": [ 00:29:09.988 { 00:29:09.988 "dma_device_id": "system", 00:29:09.988 "dma_device_type": 1 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.988 "dma_device_type": 2 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "dma_device_id": "system", 00:29:09.988 "dma_device_type": 1 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.988 "dma_device_type": 2 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "dma_device_id": "system", 00:29:09.988 "dma_device_type": 1 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.988 "dma_device_type": 2 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "dma_device_id": "system", 00:29:09.988 "dma_device_type": 1 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.988 "dma_device_type": 2 00:29:09.988 } 00:29:09.988 ], 00:29:09.988 "driver_specific": { 00:29:09.988 "raid": { 00:29:09.988 "uuid": "11865093-c965-4158-ac65-cb8b5c56062d", 00:29:09.988 "strip_size_kb": 0, 00:29:09.988 "state": "online", 00:29:09.988 "raid_level": "raid1", 00:29:09.988 "superblock": true, 00:29:09.988 "num_base_bdevs": 4, 00:29:09.988 "num_base_bdevs_discovered": 4, 00:29:09.988 "num_base_bdevs_operational": 4, 00:29:09.988 "base_bdevs_list": [ 00:29:09.988 { 00:29:09.988 "name": "NewBaseBdev", 00:29:09.988 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:09.988 "is_configured": true, 00:29:09.988 "data_offset": 2048, 00:29:09.988 "data_size": 63488 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "name": "BaseBdev2", 00:29:09.988 "uuid": 
"054075b6-a65d-431f-86ce-776d0a3c2656", 00:29:09.988 "is_configured": true, 00:29:09.988 "data_offset": 2048, 00:29:09.988 "data_size": 63488 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "name": "BaseBdev3", 00:29:09.988 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:29:09.988 "is_configured": true, 00:29:09.988 "data_offset": 2048, 00:29:09.988 "data_size": 63488 00:29:09.988 }, 00:29:09.988 { 00:29:09.988 "name": "BaseBdev4", 00:29:09.988 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:29:09.988 "is_configured": true, 00:29:09.988 "data_offset": 2048, 00:29:09.988 "data_size": 63488 00:29:09.988 } 00:29:09.988 ] 00:29:09.988 } 00:29:09.988 } 00:29:09.988 }' 00:29:09.988 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:10.252 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:29:10.252 BaseBdev2 00:29:10.252 BaseBdev3 00:29:10.252 BaseBdev4' 00:29:10.252 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:10.252 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:10.252 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:29:10.511 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:10.511 "name": "NewBaseBdev", 00:29:10.512 "aliases": [ 00:29:10.512 "01937f73-a822-4600-9915-91745d82f0ee" 00:29:10.512 ], 00:29:10.512 "product_name": "Malloc disk", 00:29:10.512 "block_size": 512, 00:29:10.512 "num_blocks": 65536, 00:29:10.512 "uuid": "01937f73-a822-4600-9915-91745d82f0ee", 00:29:10.512 "assigned_rate_limits": { 00:29:10.512 "rw_ios_per_sec": 0, 00:29:10.512 "rw_mbytes_per_sec": 0, 00:29:10.512 "r_mbytes_per_sec": 0, 00:29:10.512 "w_mbytes_per_sec": 0 00:29:10.512 }, 00:29:10.512 "claimed": true, 00:29:10.512 "claim_type": "exclusive_write", 00:29:10.512 "zoned": false, 00:29:10.512 "supported_io_types": { 00:29:10.512 "read": true, 00:29:10.512 "write": true, 00:29:10.512 "unmap": true, 00:29:10.512 "write_zeroes": true, 00:29:10.512 "flush": true, 00:29:10.512 "reset": true, 00:29:10.512 "compare": false, 00:29:10.512 "compare_and_write": false, 00:29:10.512 "abort": true, 00:29:10.512 "nvme_admin": false, 00:29:10.512 "nvme_io": false 00:29:10.512 }, 00:29:10.512 "memory_domains": [ 00:29:10.512 { 00:29:10.512 "dma_device_id": "system", 00:29:10.512 "dma_device_type": 1 00:29:10.512 }, 00:29:10.512 { 00:29:10.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.512 "dma_device_type": 2 00:29:10.512 } 00:29:10.512 ], 00:29:10.512 "driver_specific": {} 00:29:10.512 }' 00:29:10.512 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:10.512 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:10.512 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:10.512 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:10.512 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:10.512 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:10.512 11:52:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:10.512 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:10.770 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:10.770 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:10.770 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:10.770 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:10.770 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:10.770 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:29:10.770 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:11.029 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:11.029 "name": "BaseBdev2", 00:29:11.029 "aliases": [ 00:29:11.029 "054075b6-a65d-431f-86ce-776d0a3c2656" 00:29:11.029 ], 00:29:11.029 "product_name": "Malloc disk", 00:29:11.029 "block_size": 512, 00:29:11.029 "num_blocks": 65536, 00:29:11.029 "uuid": "054075b6-a65d-431f-86ce-776d0a3c2656", 00:29:11.029 "assigned_rate_limits": { 00:29:11.029 "rw_ios_per_sec": 0, 00:29:11.029 "rw_mbytes_per_sec": 0, 00:29:11.029 "r_mbytes_per_sec": 0, 00:29:11.029 "w_mbytes_per_sec": 0 00:29:11.029 }, 00:29:11.029 "claimed": true, 00:29:11.029 "claim_type": "exclusive_write", 00:29:11.029 "zoned": false, 00:29:11.029 "supported_io_types": { 00:29:11.029 "read": true, 00:29:11.029 "write": true, 00:29:11.029 "unmap": true, 00:29:11.029 "write_zeroes": true, 00:29:11.029 "flush": true, 00:29:11.029 "reset": true, 00:29:11.029 "compare": false, 00:29:11.029 "compare_and_write": false, 00:29:11.029 "abort": true, 00:29:11.029 "nvme_admin": false, 00:29:11.029 "nvme_io": false 00:29:11.029 }, 00:29:11.029 "memory_domains": [ 00:29:11.029 { 00:29:11.029 "dma_device_id": "system", 00:29:11.029 "dma_device_type": 1 00:29:11.029 }, 00:29:11.029 { 00:29:11.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:11.029 "dma_device_type": 2 00:29:11.029 } 00:29:11.029 ], 00:29:11.029 "driver_specific": {} 00:29:11.029 }' 00:29:11.029 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:11.029 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:11.029 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:11.029 11:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:11.029 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:11.287 11:52:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:29:11.287 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:11.545 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:11.545 "name": "BaseBdev3", 00:29:11.545 "aliases": [ 00:29:11.545 "18805b5c-6dc2-4864-89cd-c536dabca630" 00:29:11.545 ], 00:29:11.545 "product_name": "Malloc disk", 00:29:11.545 "block_size": 512, 00:29:11.545 "num_blocks": 65536, 00:29:11.545 "uuid": "18805b5c-6dc2-4864-89cd-c536dabca630", 00:29:11.545 "assigned_rate_limits": { 00:29:11.545 "rw_ios_per_sec": 0, 00:29:11.545 "rw_mbytes_per_sec": 0, 00:29:11.545 "r_mbytes_per_sec": 0, 00:29:11.545 "w_mbytes_per_sec": 0 00:29:11.545 }, 00:29:11.545 "claimed": true, 00:29:11.545 "claim_type": "exclusive_write", 00:29:11.545 "zoned": false, 00:29:11.545 "supported_io_types": { 00:29:11.545 "read": true, 00:29:11.545 "write": true, 00:29:11.545 "unmap": true, 00:29:11.545 "write_zeroes": true, 00:29:11.545 "flush": true, 00:29:11.545 "reset": true, 00:29:11.545 "compare": false, 00:29:11.545 "compare_and_write": false, 00:29:11.545 "abort": true, 00:29:11.545 "nvme_admin": false, 00:29:11.545 "nvme_io": false 00:29:11.545 }, 00:29:11.545 "memory_domains": [ 00:29:11.545 { 00:29:11.545 "dma_device_id": "system", 00:29:11.545 "dma_device_type": 1 00:29:11.545 }, 00:29:11.545 { 00:29:11.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:11.545 "dma_device_type": 2 00:29:11.545 } 00:29:11.545 ], 00:29:11.545 "driver_specific": {} 00:29:11.545 }' 00:29:11.545 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:11.802 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:12.060 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:12.060 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:12.060 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:12.060 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:29:12.060 11:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:12.319 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:12.319 "name": "BaseBdev4", 00:29:12.319 "aliases": [ 00:29:12.319 "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb" 00:29:12.319 ], 00:29:12.319 "product_name": "Malloc disk", 00:29:12.319 "block_size": 512, 00:29:12.319 "num_blocks": 65536, 00:29:12.319 "uuid": "f75d6095-c0e6-45b3-a1ef-f7a4f8fb51eb", 00:29:12.319 "assigned_rate_limits": { 00:29:12.319 "rw_ios_per_sec": 0, 00:29:12.319 "rw_mbytes_per_sec": 0, 00:29:12.319 "r_mbytes_per_sec": 0, 00:29:12.319 "w_mbytes_per_sec": 0 00:29:12.319 }, 00:29:12.319 "claimed": true, 00:29:12.319 "claim_type": "exclusive_write", 00:29:12.319 "zoned": false, 00:29:12.319 "supported_io_types": { 00:29:12.319 "read": true, 00:29:12.319 "write": true, 00:29:12.319 "unmap": true, 00:29:12.319 "write_zeroes": true, 00:29:12.319 "flush": true, 00:29:12.319 "reset": true, 00:29:12.319 "compare": false, 00:29:12.319 "compare_and_write": false, 00:29:12.319 "abort": true, 00:29:12.319 "nvme_admin": false, 00:29:12.319 "nvme_io": false 00:29:12.319 }, 00:29:12.319 "memory_domains": [ 00:29:12.319 { 00:29:12.319 "dma_device_id": "system", 00:29:12.319 "dma_device_type": 1 00:29:12.319 }, 00:29:12.319 { 00:29:12.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.319 "dma_device_type": 2 00:29:12.319 } 00:29:12.319 ], 00:29:12.319 "driver_specific": {} 00:29:12.319 }' 00:29:12.319 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:12.319 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:12.319 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:12.319 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:12.319 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:12.577 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:12.578 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:12.578 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:12.578 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:12.578 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:12.578 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:12.578 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:12.578 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:12.836 [2024-06-10 11:52:44.826317] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:12.836 [2024-06-10 11:52:44.826617] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:12.836 [2024-06-10 11:52:44.826820] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:12.836 [2024-06-10 11:52:44.827257] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:29:12.836 [2024-06-10 11:52:44.827366] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 143332 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 143332 ']' 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 143332 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 143332 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 143332' 00:29:12.836 killing process with pid 143332 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 143332 00:29:12.836 [2024-06-10 11:52:44.882647] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:12.836 11:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 143332 00:29:13.402 [2024-06-10 11:52:45.349478] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:15.302 11:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:29:15.302 00:29:15.302 real 0m39.416s 00:29:15.302 user 1m11.726s 00:29:15.302 sys 0m5.352s 00:29:15.302 11:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:15.302 ************************************ 00:29:15.302 END TEST raid_state_function_test_sb 00:29:15.302 ************************************ 00:29:15.302 11:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:15.302 11:52:46 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:29:15.302 11:52:46 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:29:15.302 11:52:46 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:15.302 11:52:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:15.302 ************************************ 00:29:15.302 START TEST raid_superblock_test 00:29:15.302 ************************************ 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 4 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:29:15.302 
11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=144484 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 144484 /var/tmp/spdk-raid.sock 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 144484 ']' 00:29:15.302 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:15.303 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:15.303 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:15.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:15.303 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:15.303 11:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.303 [2024-06-10 11:52:47.045812] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
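The RPCs this test drives can be replayed by hand against the same socket; the following is a minimal sketch, assuming the bdev_svc app started above is already listening on /var/tmp/spdk-raid.sock (the RPC shell variable and the loop are illustrative shorthand, not part of bdev_raid.sh; all RPC names and arguments are the ones visible in this trace):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Back each passthru bdev with a 32 MiB, 512-byte-block malloc bdev,
  # mirroring the malloc1..malloc4 / pt1..pt4 pairs created below.
  for i in 1 2 3 4; do
      $RPC bdev_malloc_create 32 512 -b "malloc$i"
      $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done

  # Assemble a raid1 bdev with an on-disk superblock (-s), as bdev_raid.sh@429 does.
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

  # Every state check in the trace reduces to this query pattern:
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'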
00:29:15.303 [2024-06-10 11:52:47.046402] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144484 ] 00:29:15.303 [2024-06-10 11:52:47.255745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.560 [2024-06-10 11:52:47.533654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.843 [2024-06-10 11:52:47.774842] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:16.101 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:29:16.359 malloc1 00:29:16.359 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:16.617 [2024-06-10 11:52:48.558344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:16.617 [2024-06-10 11:52:48.558670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.617 [2024-06-10 11:52:48.558752] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:16.617 [2024-06-10 11:52:48.558898] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.617 [2024-06-10 11:52:48.561648] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.617 [2024-06-10 11:52:48.561837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:16.617 pt1 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:16.617 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:29:16.875 malloc2 00:29:17.133 11:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:17.133 [2024-06-10 11:52:49.149869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:17.133 [2024-06-10 11:52:49.150875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.133 [2024-06-10 11:52:49.151052] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:17.133 [2024-06-10 11:52:49.151169] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.133 [2024-06-10 11:52:49.153881] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.133 [2024-06-10 11:52:49.154060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:17.133 pt2 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:17.133 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:29:17.697 malloc3 00:29:17.697 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:17.697 [2024-06-10 11:52:49.752697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:17.697 [2024-06-10 11:52:49.753047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.697 [2024-06-10 11:52:49.753189] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:17.697 [2024-06-10 11:52:49.753302] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.697 [2024-06-10 11:52:49.756013] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.955 [2024-06-10 11:52:49.756197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:17.955 pt3 00:29:17.955 11:52:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:17.955 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:17.955 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:29:17.955 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:29:17.955 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:29:17.955 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:17.955 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:17.955 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:17.955 11:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:29:18.212 malloc4 00:29:18.212 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:18.469 [2024-06-10 11:52:50.296146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:18.469 [2024-06-10 11:52:50.296517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:18.469 [2024-06-10 11:52:50.296661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:18.469 [2024-06-10 11:52:50.296782] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:18.469 [2024-06-10 11:52:50.299478] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:18.469 [2024-06-10 11:52:50.299665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:18.469 pt4 00:29:18.469 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:18.469 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:18.469 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:29:18.727 [2024-06-10 11:52:50.532200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:18.727 [2024-06-10 11:52:50.534684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:18.727 [2024-06-10 11:52:50.534911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:18.727 [2024-06-10 11:52:50.535088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:18.727 [2024-06-10 11:52:50.535469] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:29:18.727 [2024-06-10 11:52:50.535587] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:18.727 [2024-06-10 11:52:50.535817] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:18.727 [2024-06-10 11:52:50.536303] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:29:18.727 [2024-06-10 11:52:50.536413] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:29:18.727 [2024-06-10 11:52:50.536727] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.727 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.985 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.985 "name": "raid_bdev1", 00:29:18.985 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:18.985 "strip_size_kb": 0, 00:29:18.985 "state": "online", 00:29:18.985 "raid_level": "raid1", 00:29:18.985 "superblock": true, 00:29:18.985 "num_base_bdevs": 4, 00:29:18.985 "num_base_bdevs_discovered": 4, 00:29:18.985 "num_base_bdevs_operational": 4, 00:29:18.985 "base_bdevs_list": [ 00:29:18.985 { 00:29:18.985 "name": "pt1", 00:29:18.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:18.985 "is_configured": true, 00:29:18.985 "data_offset": 2048, 00:29:18.985 "data_size": 63488 00:29:18.985 }, 00:29:18.985 { 00:29:18.985 "name": "pt2", 00:29:18.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:18.985 "is_configured": true, 00:29:18.985 "data_offset": 2048, 00:29:18.985 "data_size": 63488 00:29:18.985 }, 00:29:18.985 { 00:29:18.985 "name": "pt3", 00:29:18.985 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:18.985 "is_configured": true, 00:29:18.985 "data_offset": 2048, 00:29:18.985 "data_size": 63488 00:29:18.985 }, 00:29:18.985 { 00:29:18.985 "name": "pt4", 00:29:18.985 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:18.985 "is_configured": true, 00:29:18.985 "data_offset": 2048, 00:29:18.985 "data_size": 63488 00:29:18.985 } 00:29:18.985 ] 00:29:18.985 }' 00:29:18.985 11:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.985 11:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.552 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:29:19.552 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:19.552 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:19.552 11:52:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:19.552 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:19.552 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:29:19.552 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:19.552 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:19.810 [2024-06-10 11:52:51.809326] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:19.810 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:19.810 "name": "raid_bdev1", 00:29:19.810 "aliases": [ 00:29:19.810 "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0" 00:29:19.810 ], 00:29:19.810 "product_name": "Raid Volume", 00:29:19.810 "block_size": 512, 00:29:19.810 "num_blocks": 63488, 00:29:19.810 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:19.810 "assigned_rate_limits": { 00:29:19.810 "rw_ios_per_sec": 0, 00:29:19.810 "rw_mbytes_per_sec": 0, 00:29:19.810 "r_mbytes_per_sec": 0, 00:29:19.810 "w_mbytes_per_sec": 0 00:29:19.810 }, 00:29:19.810 "claimed": false, 00:29:19.810 "zoned": false, 00:29:19.810 "supported_io_types": { 00:29:19.810 "read": true, 00:29:19.810 "write": true, 00:29:19.810 "unmap": false, 00:29:19.810 "write_zeroes": true, 00:29:19.810 "flush": false, 00:29:19.810 "reset": true, 00:29:19.810 "compare": false, 00:29:19.810 "compare_and_write": false, 00:29:19.810 "abort": false, 00:29:19.810 "nvme_admin": false, 00:29:19.810 "nvme_io": false 00:29:19.810 }, 00:29:19.810 "memory_domains": [ 00:29:19.810 { 00:29:19.810 "dma_device_id": "system", 00:29:19.810 "dma_device_type": 1 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.810 "dma_device_type": 2 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "dma_device_id": "system", 00:29:19.810 "dma_device_type": 1 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.810 "dma_device_type": 2 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "dma_device_id": "system", 00:29:19.810 "dma_device_type": 1 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.810 "dma_device_type": 2 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "dma_device_id": "system", 00:29:19.810 "dma_device_type": 1 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.810 "dma_device_type": 2 00:29:19.810 } 00:29:19.810 ], 00:29:19.810 "driver_specific": { 00:29:19.810 "raid": { 00:29:19.810 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:19.810 "strip_size_kb": 0, 00:29:19.810 "state": "online", 00:29:19.810 "raid_level": "raid1", 00:29:19.810 "superblock": true, 00:29:19.810 "num_base_bdevs": 4, 00:29:19.810 "num_base_bdevs_discovered": 4, 00:29:19.810 "num_base_bdevs_operational": 4, 00:29:19.810 "base_bdevs_list": [ 00:29:19.810 { 00:29:19.810 "name": "pt1", 00:29:19.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:19.810 "is_configured": true, 00:29:19.810 "data_offset": 2048, 00:29:19.810 "data_size": 63488 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "name": "pt2", 00:29:19.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:19.810 "is_configured": true, 00:29:19.810 "data_offset": 2048, 00:29:19.810 "data_size": 63488 00:29:19.810 }, 00:29:19.810 { 
00:29:19.810 "name": "pt3", 00:29:19.810 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:19.810 "is_configured": true, 00:29:19.810 "data_offset": 2048, 00:29:19.810 "data_size": 63488 00:29:19.810 }, 00:29:19.810 { 00:29:19.810 "name": "pt4", 00:29:19.810 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:19.810 "is_configured": true, 00:29:19.810 "data_offset": 2048, 00:29:19.810 "data_size": 63488 00:29:19.810 } 00:29:19.810 ] 00:29:19.810 } 00:29:19.810 } 00:29:19.811 }' 00:29:19.811 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:20.068 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:20.069 pt2 00:29:20.069 pt3 00:29:20.069 pt4' 00:29:20.069 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:20.069 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:20.069 11:52:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:20.326 "name": "pt1", 00:29:20.326 "aliases": [ 00:29:20.326 "00000000-0000-0000-0000-000000000001" 00:29:20.326 ], 00:29:20.326 "product_name": "passthru", 00:29:20.326 "block_size": 512, 00:29:20.326 "num_blocks": 65536, 00:29:20.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:20.326 "assigned_rate_limits": { 00:29:20.326 "rw_ios_per_sec": 0, 00:29:20.326 "rw_mbytes_per_sec": 0, 00:29:20.326 "r_mbytes_per_sec": 0, 00:29:20.326 "w_mbytes_per_sec": 0 00:29:20.326 }, 00:29:20.326 "claimed": true, 00:29:20.326 "claim_type": "exclusive_write", 00:29:20.326 "zoned": false, 00:29:20.326 "supported_io_types": { 00:29:20.326 "read": true, 00:29:20.326 "write": true, 00:29:20.326 "unmap": true, 00:29:20.326 "write_zeroes": true, 00:29:20.326 "flush": true, 00:29:20.326 "reset": true, 00:29:20.326 "compare": false, 00:29:20.326 "compare_and_write": false, 00:29:20.326 "abort": true, 00:29:20.326 "nvme_admin": false, 00:29:20.326 "nvme_io": false 00:29:20.326 }, 00:29:20.326 "memory_domains": [ 00:29:20.326 { 00:29:20.326 "dma_device_id": "system", 00:29:20.326 "dma_device_type": 1 00:29:20.326 }, 00:29:20.326 { 00:29:20.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.326 "dma_device_type": 2 00:29:20.326 } 00:29:20.326 ], 00:29:20.326 "driver_specific": { 00:29:20.326 "passthru": { 00:29:20.326 "name": "pt1", 00:29:20.326 "base_bdev_name": "malloc1" 00:29:20.326 } 00:29:20.326 } 00:29:20.326 }' 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:20.326 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:20.584 11:52:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:20.584 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:20.584 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:20.584 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:20.584 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:20.584 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:20.584 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:20.843 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:20.843 "name": "pt2", 00:29:20.843 "aliases": [ 00:29:20.843 "00000000-0000-0000-0000-000000000002" 00:29:20.843 ], 00:29:20.843 "product_name": "passthru", 00:29:20.843 "block_size": 512, 00:29:20.843 "num_blocks": 65536, 00:29:20.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:20.843 "assigned_rate_limits": { 00:29:20.843 "rw_ios_per_sec": 0, 00:29:20.843 "rw_mbytes_per_sec": 0, 00:29:20.843 "r_mbytes_per_sec": 0, 00:29:20.843 "w_mbytes_per_sec": 0 00:29:20.843 }, 00:29:20.843 "claimed": true, 00:29:20.843 "claim_type": "exclusive_write", 00:29:20.843 "zoned": false, 00:29:20.843 "supported_io_types": { 00:29:20.843 "read": true, 00:29:20.843 "write": true, 00:29:20.843 "unmap": true, 00:29:20.843 "write_zeroes": true, 00:29:20.843 "flush": true, 00:29:20.843 "reset": true, 00:29:20.843 "compare": false, 00:29:20.843 "compare_and_write": false, 00:29:20.843 "abort": true, 00:29:20.843 "nvme_admin": false, 00:29:20.843 "nvme_io": false 00:29:20.843 }, 00:29:20.843 "memory_domains": [ 00:29:20.843 { 00:29:20.843 "dma_device_id": "system", 00:29:20.843 "dma_device_type": 1 00:29:20.843 }, 00:29:20.843 { 00:29:20.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.843 "dma_device_type": 2 00:29:20.843 } 00:29:20.843 ], 00:29:20.843 "driver_specific": { 00:29:20.843 "passthru": { 00:29:20.843 "name": "pt2", 00:29:20.843 "base_bdev_name": "malloc2" 00:29:20.843 } 00:29:20.843 } 00:29:20.843 }' 00:29:20.843 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:20.843 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:20.843 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:20.843 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:21.101 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:21.101 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:21.101 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:21.101 11:52:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:21.101 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:21.101 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:21.101 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:21.101 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:21.101 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:29:21.101 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:29:21.101 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:21.359 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:21.359 "name": "pt3", 00:29:21.359 "aliases": [ 00:29:21.359 "00000000-0000-0000-0000-000000000003" 00:29:21.359 ], 00:29:21.359 "product_name": "passthru", 00:29:21.359 "block_size": 512, 00:29:21.359 "num_blocks": 65536, 00:29:21.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:21.359 "assigned_rate_limits": { 00:29:21.359 "rw_ios_per_sec": 0, 00:29:21.359 "rw_mbytes_per_sec": 0, 00:29:21.359 "r_mbytes_per_sec": 0, 00:29:21.359 "w_mbytes_per_sec": 0 00:29:21.359 }, 00:29:21.359 "claimed": true, 00:29:21.359 "claim_type": "exclusive_write", 00:29:21.359 "zoned": false, 00:29:21.359 "supported_io_types": { 00:29:21.359 "read": true, 00:29:21.359 "write": true, 00:29:21.359 "unmap": true, 00:29:21.359 "write_zeroes": true, 00:29:21.359 "flush": true, 00:29:21.359 "reset": true, 00:29:21.359 "compare": false, 00:29:21.359 "compare_and_write": false, 00:29:21.359 "abort": true, 00:29:21.359 "nvme_admin": false, 00:29:21.359 "nvme_io": false 00:29:21.359 }, 00:29:21.359 "memory_domains": [ 00:29:21.359 { 00:29:21.359 "dma_device_id": "system", 00:29:21.359 "dma_device_type": 1 00:29:21.359 }, 00:29:21.359 { 00:29:21.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:21.359 "dma_device_type": 2 00:29:21.359 } 00:29:21.359 ], 00:29:21.359 "driver_specific": { 00:29:21.359 "passthru": { 00:29:21.359 "name": "pt3", 00:29:21.359 "base_bdev_name": "malloc3" 00:29:21.359 } 00:29:21.359 } 00:29:21.359 }' 00:29:21.359 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:21.359 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:21.618 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:21.875 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:21.875 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:21.875 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:29:21.875 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:22.133 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:22.133 "name": "pt4", 00:29:22.133 "aliases": [ 
00:29:22.133 "00000000-0000-0000-0000-000000000004" 00:29:22.133 ], 00:29:22.133 "product_name": "passthru", 00:29:22.133 "block_size": 512, 00:29:22.133 "num_blocks": 65536, 00:29:22.133 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:22.133 "assigned_rate_limits": { 00:29:22.133 "rw_ios_per_sec": 0, 00:29:22.133 "rw_mbytes_per_sec": 0, 00:29:22.133 "r_mbytes_per_sec": 0, 00:29:22.133 "w_mbytes_per_sec": 0 00:29:22.133 }, 00:29:22.133 "claimed": true, 00:29:22.133 "claim_type": "exclusive_write", 00:29:22.133 "zoned": false, 00:29:22.133 "supported_io_types": { 00:29:22.133 "read": true, 00:29:22.133 "write": true, 00:29:22.133 "unmap": true, 00:29:22.133 "write_zeroes": true, 00:29:22.133 "flush": true, 00:29:22.133 "reset": true, 00:29:22.133 "compare": false, 00:29:22.133 "compare_and_write": false, 00:29:22.133 "abort": true, 00:29:22.133 "nvme_admin": false, 00:29:22.133 "nvme_io": false 00:29:22.133 }, 00:29:22.133 "memory_domains": [ 00:29:22.133 { 00:29:22.133 "dma_device_id": "system", 00:29:22.133 "dma_device_type": 1 00:29:22.133 }, 00:29:22.133 { 00:29:22.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:22.133 "dma_device_type": 2 00:29:22.133 } 00:29:22.133 ], 00:29:22.133 "driver_specific": { 00:29:22.133 "passthru": { 00:29:22.133 "name": "pt4", 00:29:22.133 "base_bdev_name": "malloc4" 00:29:22.133 } 00:29:22.133 } 00:29:22.133 }' 00:29:22.133 11:52:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:22.133 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:22.133 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:22.133 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:22.133 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:22.133 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:22.133 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:22.390 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:22.390 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:22.390 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:22.390 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:22.390 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:22.390 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:22.390 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:29:22.647 [2024-06-10 11:52:54.601942] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:22.647 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e597ad74-da60-4ef7-99a6-0d6e57a3d7d0 00:29:22.647 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z e597ad74-da60-4ef7-99a6-0d6e57a3d7d0 ']' 00:29:22.647 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:22.905 [2024-06-10 11:52:54.897730] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:22.905 
[2024-06-10 11:52:54.897913] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:22.905 [2024-06-10 11:52:54.898078] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:22.905 [2024-06-10 11:52:54.898283] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:22.905 [2024-06-10 11:52:54.898393] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:29:22.905 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.905 11:52:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:29:23.162 11:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:29:23.162 11:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:29:23.162 11:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:23.162 11:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:23.727 11:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:23.727 11:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:23.727 11:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:23.727 11:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:23.987 11:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:23.987 11:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:24.245 11:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:24.245 11:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:24.503 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:24.761 [2024-06-10 11:52:56.750071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:24.761 [2024-06-10 11:52:56.752584] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:24.761 [2024-06-10 11:52:56.752809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:24.761 [2024-06-10 11:52:56.752889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:29:24.761 [2024-06-10 11:52:56.753057] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:24.761 [2024-06-10 11:52:56.753192] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:24.761 [2024-06-10 11:52:56.753321] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:29:24.761 [2024-06-10 11:52:56.753487] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:29:24.761 [2024-06-10 11:52:56.753628] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:24.761 [2024-06-10 11:52:56.753672] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:29:24.761 request: 00:29:24.761 { 00:29:24.761 "name": "raid_bdev1", 00:29:24.761 "raid_level": "raid1", 00:29:24.761 "base_bdevs": [ 00:29:24.761 "malloc1", 00:29:24.761 "malloc2", 00:29:24.761 "malloc3", 00:29:24.761 "malloc4" 00:29:24.761 ], 00:29:24.761 "superblock": false, 00:29:24.761 "method": "bdev_raid_create", 00:29:24.761 "req_id": 1 00:29:24.761 } 00:29:24.761 Got JSON-RPC error response 00:29:24.761 response: 00:29:24.761 { 00:29:24.761 "code": -17, 00:29:24.761 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:24.761 } 00:29:24.761 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:29:24.761 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:24.761 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:24.761 11:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:24.761 11:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:29:24.761 11:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.020 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:29:25.020 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:29:25.020 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:25.277 [2024-06-10 11:52:57.318290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:25.277 [2024-06-10 11:52:57.318541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:25.277 [2024-06-10 11:52:57.318680] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:25.277 [2024-06-10 11:52:57.318811] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:25.277 [2024-06-10 11:52:57.321473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:25.277 [2024-06-10 11:52:57.321647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:25.277 [2024-06-10 11:52:57.321865] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:25.277 [2024-06-10 11:52:57.321995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:25.277 pt1 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.536 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.794 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:25.794 "name": "raid_bdev1", 00:29:25.794 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:25.794 "strip_size_kb": 0, 00:29:25.794 "state": "configuring", 00:29:25.794 "raid_level": "raid1", 00:29:25.794 "superblock": true, 00:29:25.794 "num_base_bdevs": 4, 00:29:25.794 "num_base_bdevs_discovered": 1, 00:29:25.794 "num_base_bdevs_operational": 4, 00:29:25.794 "base_bdevs_list": [ 00:29:25.794 { 00:29:25.794 "name": "pt1", 00:29:25.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:25.794 "is_configured": 
true, 00:29:25.794 "data_offset": 2048, 00:29:25.794 "data_size": 63488 00:29:25.794 }, 00:29:25.794 { 00:29:25.794 "name": null, 00:29:25.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:25.794 "is_configured": false, 00:29:25.794 "data_offset": 2048, 00:29:25.794 "data_size": 63488 00:29:25.794 }, 00:29:25.794 { 00:29:25.794 "name": null, 00:29:25.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:25.794 "is_configured": false, 00:29:25.794 "data_offset": 2048, 00:29:25.794 "data_size": 63488 00:29:25.794 }, 00:29:25.794 { 00:29:25.794 "name": null, 00:29:25.794 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:25.794 "is_configured": false, 00:29:25.794 "data_offset": 2048, 00:29:25.794 "data_size": 63488 00:29:25.794 } 00:29:25.794 ] 00:29:25.794 }' 00:29:25.794 11:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:25.794 11:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:26.359 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:29:26.359 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:26.616 [2024-06-10 11:52:58.622604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:26.616 [2024-06-10 11:52:58.622914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.616 [2024-06-10 11:52:58.623096] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:26.616 [2024-06-10 11:52:58.623221] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.616 [2024-06-10 11:52:58.623756] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.616 [2024-06-10 11:52:58.623906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:26.616 [2024-06-10 11:52:58.624126] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:26.616 [2024-06-10 11:52:58.624228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:26.616 pt2 00:29:26.616 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:26.875 [2024-06-10 11:52:58.906729] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:26.875 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:27.132 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.132 11:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.389 11:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:27.389 "name": "raid_bdev1", 00:29:27.389 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:27.389 "strip_size_kb": 0, 00:29:27.389 "state": "configuring", 00:29:27.389 "raid_level": "raid1", 00:29:27.389 "superblock": true, 00:29:27.389 "num_base_bdevs": 4, 00:29:27.389 "num_base_bdevs_discovered": 1, 00:29:27.389 "num_base_bdevs_operational": 4, 00:29:27.389 "base_bdevs_list": [ 00:29:27.389 { 00:29:27.389 "name": "pt1", 00:29:27.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:27.389 "is_configured": true, 00:29:27.389 "data_offset": 2048, 00:29:27.389 "data_size": 63488 00:29:27.389 }, 00:29:27.389 { 00:29:27.389 "name": null, 00:29:27.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:27.389 "is_configured": false, 00:29:27.389 "data_offset": 2048, 00:29:27.389 "data_size": 63488 00:29:27.389 }, 00:29:27.389 { 00:29:27.389 "name": null, 00:29:27.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:27.389 "is_configured": false, 00:29:27.389 "data_offset": 2048, 00:29:27.389 "data_size": 63488 00:29:27.389 }, 00:29:27.389 { 00:29:27.389 "name": null, 00:29:27.389 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:27.389 "is_configured": false, 00:29:27.389 "data_offset": 2048, 00:29:27.389 "data_size": 63488 00:29:27.389 } 00:29:27.389 ] 00:29:27.389 }' 00:29:27.389 11:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:27.390 11:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.955 11:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:29:27.955 11:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:27.955 11:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:28.214 [2024-06-10 11:53:00.223078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:28.214 [2024-06-10 11:53:00.223338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:28.214 [2024-06-10 11:53:00.223475] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:28.214 [2024-06-10 11:53:00.223598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:28.214 [2024-06-10 11:53:00.224210] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:28.214 [2024-06-10 11:53:00.224366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:28.214 [2024-06-10 11:53:00.224584] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:28.214 [2024-06-10 11:53:00.224700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:28.214 pt2 00:29:28.214 11:53:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:28.214 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:28.214 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:28.472 [2024-06-10 11:53:00.511163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:28.472 [2024-06-10 11:53:00.511455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:28.472 [2024-06-10 11:53:00.511522] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:28.472 [2024-06-10 11:53:00.511652] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:28.472 [2024-06-10 11:53:00.512171] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:28.472 [2024-06-10 11:53:00.512335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:28.472 [2024-06-10 11:53:00.512543] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:28.472 [2024-06-10 11:53:00.512674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:28.472 pt3 00:29:28.730 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:28.730 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:28.730 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:28.730 [2024-06-10 11:53:00.787199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:28.730 [2024-06-10 11:53:00.787490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:28.730 [2024-06-10 11:53:00.787572] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:28.730 [2024-06-10 11:53:00.787700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:28.730 [2024-06-10 11:53:00.788209] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:28.730 [2024-06-10 11:53:00.788368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:28.730 [2024-06-10 11:53:00.788560] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:28.730 [2024-06-10 11:53:00.788674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:28.730 [2024-06-10 11:53:00.788874] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:29:28.987 [2024-06-10 11:53:00.788977] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:28.988 [2024-06-10 11:53:00.789128] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:28.988 [2024-06-10 11:53:00.789647] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:29:28.988 [2024-06-10 11:53:00.789768] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:29:28.988 [2024-06-10 11:53:00.790015] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
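What the preceding records show: once raid_bdev1 and its passthru bdevs have been deleted, re-registering the passthru bdevs is enough to bring the array back, because each base bdev still carries raid_bdev1's superblock and bdev_raid examines newly registered bdevs (hence the "raid superblock found on bdev ptN" lines and the volume returning to online once pt4 appears). A rough shell equivalent, with the state check simplified, assuming a clean post-delete starting point:

# Re-create the passthru bdevs on the surviving malloc bdevs; no explicit
# bdev_raid_create is needed, the on-disk superblocks drive the reassembly.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
                -u "00000000-0000-0000-0000-00000000000$i"
done
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"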
00:29:28.988 pt4 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.988 11:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.245 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:29.245 "name": "raid_bdev1", 00:29:29.245 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:29.245 "strip_size_kb": 0, 00:29:29.245 "state": "online", 00:29:29.245 "raid_level": "raid1", 00:29:29.245 "superblock": true, 00:29:29.245 "num_base_bdevs": 4, 00:29:29.245 "num_base_bdevs_discovered": 4, 00:29:29.245 "num_base_bdevs_operational": 4, 00:29:29.245 "base_bdevs_list": [ 00:29:29.245 { 00:29:29.245 "name": "pt1", 00:29:29.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:29.245 "is_configured": true, 00:29:29.245 "data_offset": 2048, 00:29:29.245 "data_size": 63488 00:29:29.245 }, 00:29:29.245 { 00:29:29.245 "name": "pt2", 00:29:29.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:29.245 "is_configured": true, 00:29:29.245 "data_offset": 2048, 00:29:29.245 "data_size": 63488 00:29:29.245 }, 00:29:29.245 { 00:29:29.245 "name": "pt3", 00:29:29.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:29.245 "is_configured": true, 00:29:29.245 "data_offset": 2048, 00:29:29.245 "data_size": 63488 00:29:29.245 }, 00:29:29.245 { 00:29:29.245 "name": "pt4", 00:29:29.245 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:29.245 "is_configured": true, 00:29:29.245 "data_offset": 2048, 00:29:29.245 "data_size": 63488 00:29:29.245 } 00:29:29.245 ] 00:29:29.245 }' 00:29:29.245 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:29.245 11:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.811 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:29:29.811 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:29.811 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:29:29.811 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:29.811 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:29.811 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:29:29.811 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:29.811 11:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:30.070 [2024-06-10 11:53:02.023797] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:30.070 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:30.070 "name": "raid_bdev1", 00:29:30.070 "aliases": [ 00:29:30.070 "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0" 00:29:30.070 ], 00:29:30.070 "product_name": "Raid Volume", 00:29:30.070 "block_size": 512, 00:29:30.070 "num_blocks": 63488, 00:29:30.070 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:30.070 "assigned_rate_limits": { 00:29:30.070 "rw_ios_per_sec": 0, 00:29:30.070 "rw_mbytes_per_sec": 0, 00:29:30.070 "r_mbytes_per_sec": 0, 00:29:30.070 "w_mbytes_per_sec": 0 00:29:30.070 }, 00:29:30.070 "claimed": false, 00:29:30.070 "zoned": false, 00:29:30.070 "supported_io_types": { 00:29:30.070 "read": true, 00:29:30.070 "write": true, 00:29:30.070 "unmap": false, 00:29:30.070 "write_zeroes": true, 00:29:30.070 "flush": false, 00:29:30.070 "reset": true, 00:29:30.070 "compare": false, 00:29:30.070 "compare_and_write": false, 00:29:30.070 "abort": false, 00:29:30.070 "nvme_admin": false, 00:29:30.070 "nvme_io": false 00:29:30.070 }, 00:29:30.070 "memory_domains": [ 00:29:30.070 { 00:29:30.070 "dma_device_id": "system", 00:29:30.070 "dma_device_type": 1 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.070 "dma_device_type": 2 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "dma_device_id": "system", 00:29:30.070 "dma_device_type": 1 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.070 "dma_device_type": 2 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "dma_device_id": "system", 00:29:30.070 "dma_device_type": 1 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.070 "dma_device_type": 2 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "dma_device_id": "system", 00:29:30.070 "dma_device_type": 1 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.070 "dma_device_type": 2 00:29:30.070 } 00:29:30.070 ], 00:29:30.070 "driver_specific": { 00:29:30.070 "raid": { 00:29:30.070 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:30.070 "strip_size_kb": 0, 00:29:30.070 "state": "online", 00:29:30.070 "raid_level": "raid1", 00:29:30.070 "superblock": true, 00:29:30.070 "num_base_bdevs": 4, 00:29:30.070 "num_base_bdevs_discovered": 4, 00:29:30.070 "num_base_bdevs_operational": 4, 00:29:30.070 "base_bdevs_list": [ 00:29:30.070 { 00:29:30.070 "name": "pt1", 00:29:30.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:30.070 "is_configured": true, 00:29:30.070 "data_offset": 2048, 00:29:30.070 "data_size": 63488 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "name": "pt2", 00:29:30.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:30.070 "is_configured": true, 00:29:30.070 "data_offset": 2048, 00:29:30.070 "data_size": 63488 
00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "name": "pt3", 00:29:30.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:30.070 "is_configured": true, 00:29:30.070 "data_offset": 2048, 00:29:30.070 "data_size": 63488 00:29:30.070 }, 00:29:30.070 { 00:29:30.070 "name": "pt4", 00:29:30.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:30.070 "is_configured": true, 00:29:30.070 "data_offset": 2048, 00:29:30.070 "data_size": 63488 00:29:30.070 } 00:29:30.070 ] 00:29:30.070 } 00:29:30.070 } 00:29:30.070 }' 00:29:30.070 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:30.070 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:30.070 pt2 00:29:30.070 pt3 00:29:30.070 pt4' 00:29:30.070 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:30.070 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:30.070 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:30.720 "name": "pt1", 00:29:30.720 "aliases": [ 00:29:30.720 "00000000-0000-0000-0000-000000000001" 00:29:30.720 ], 00:29:30.720 "product_name": "passthru", 00:29:30.720 "block_size": 512, 00:29:30.720 "num_blocks": 65536, 00:29:30.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:30.720 "assigned_rate_limits": { 00:29:30.720 "rw_ios_per_sec": 0, 00:29:30.720 "rw_mbytes_per_sec": 0, 00:29:30.720 "r_mbytes_per_sec": 0, 00:29:30.720 "w_mbytes_per_sec": 0 00:29:30.720 }, 00:29:30.720 "claimed": true, 00:29:30.720 "claim_type": "exclusive_write", 00:29:30.720 "zoned": false, 00:29:30.720 "supported_io_types": { 00:29:30.720 "read": true, 00:29:30.720 "write": true, 00:29:30.720 "unmap": true, 00:29:30.720 "write_zeroes": true, 00:29:30.720 "flush": true, 00:29:30.720 "reset": true, 00:29:30.720 "compare": false, 00:29:30.720 "compare_and_write": false, 00:29:30.720 "abort": true, 00:29:30.720 "nvme_admin": false, 00:29:30.720 "nvme_io": false 00:29:30.720 }, 00:29:30.720 "memory_domains": [ 00:29:30.720 { 00:29:30.720 "dma_device_id": "system", 00:29:30.720 "dma_device_type": 1 00:29:30.720 }, 00:29:30.720 { 00:29:30.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.720 "dma_device_type": 2 00:29:30.720 } 00:29:30.720 ], 00:29:30.720 "driver_specific": { 00:29:30.720 "passthru": { 00:29:30.720 "name": "pt1", 00:29:30.720 "base_bdev_name": "malloc1" 00:29:30.720 } 00:29:30.720 } 00:29:30.720 }' 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:30.720 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:30.977 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:30.977 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:30.977 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:30.977 11:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:31.235 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:31.235 "name": "pt2", 00:29:31.235 "aliases": [ 00:29:31.235 "00000000-0000-0000-0000-000000000002" 00:29:31.235 ], 00:29:31.235 "product_name": "passthru", 00:29:31.235 "block_size": 512, 00:29:31.235 "num_blocks": 65536, 00:29:31.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:31.235 "assigned_rate_limits": { 00:29:31.235 "rw_ios_per_sec": 0, 00:29:31.235 "rw_mbytes_per_sec": 0, 00:29:31.235 "r_mbytes_per_sec": 0, 00:29:31.235 "w_mbytes_per_sec": 0 00:29:31.235 }, 00:29:31.235 "claimed": true, 00:29:31.235 "claim_type": "exclusive_write", 00:29:31.235 "zoned": false, 00:29:31.235 "supported_io_types": { 00:29:31.235 "read": true, 00:29:31.235 "write": true, 00:29:31.235 "unmap": true, 00:29:31.235 "write_zeroes": true, 00:29:31.235 "flush": true, 00:29:31.235 "reset": true, 00:29:31.235 "compare": false, 00:29:31.235 "compare_and_write": false, 00:29:31.235 "abort": true, 00:29:31.235 "nvme_admin": false, 00:29:31.235 "nvme_io": false 00:29:31.235 }, 00:29:31.235 "memory_domains": [ 00:29:31.235 { 00:29:31.235 "dma_device_id": "system", 00:29:31.235 "dma_device_type": 1 00:29:31.235 }, 00:29:31.235 { 00:29:31.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:31.235 "dma_device_type": 2 00:29:31.235 } 00:29:31.235 ], 00:29:31.235 "driver_specific": { 00:29:31.235 "passthru": { 00:29:31.235 "name": "pt2", 00:29:31.235 "base_bdev_name": "malloc2" 00:29:31.235 } 00:29:31.235 } 00:29:31.235 }' 00:29:31.235 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:31.235 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:31.235 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:31.235 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:31.235 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:31.235 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:31.235 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:31.494 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:31.494 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:31.494 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:31.494 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:31.494 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:31.494 11:53:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:31.494 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:29:31.494 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:32.059 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:32.059 "name": "pt3", 00:29:32.059 "aliases": [ 00:29:32.059 "00000000-0000-0000-0000-000000000003" 00:29:32.059 ], 00:29:32.059 "product_name": "passthru", 00:29:32.059 "block_size": 512, 00:29:32.059 "num_blocks": 65536, 00:29:32.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:32.059 "assigned_rate_limits": { 00:29:32.059 "rw_ios_per_sec": 0, 00:29:32.059 "rw_mbytes_per_sec": 0, 00:29:32.059 "r_mbytes_per_sec": 0, 00:29:32.059 "w_mbytes_per_sec": 0 00:29:32.059 }, 00:29:32.059 "claimed": true, 00:29:32.059 "claim_type": "exclusive_write", 00:29:32.059 "zoned": false, 00:29:32.059 "supported_io_types": { 00:29:32.059 "read": true, 00:29:32.059 "write": true, 00:29:32.059 "unmap": true, 00:29:32.059 "write_zeroes": true, 00:29:32.059 "flush": true, 00:29:32.059 "reset": true, 00:29:32.059 "compare": false, 00:29:32.059 "compare_and_write": false, 00:29:32.059 "abort": true, 00:29:32.059 "nvme_admin": false, 00:29:32.059 "nvme_io": false 00:29:32.059 }, 00:29:32.059 "memory_domains": [ 00:29:32.059 { 00:29:32.059 "dma_device_id": "system", 00:29:32.059 "dma_device_type": 1 00:29:32.059 }, 00:29:32.059 { 00:29:32.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:32.059 "dma_device_type": 2 00:29:32.059 } 00:29:32.059 ], 00:29:32.059 "driver_specific": { 00:29:32.059 "passthru": { 00:29:32.059 "name": "pt3", 00:29:32.059 "base_bdev_name": "malloc3" 00:29:32.059 } 00:29:32.059 } 00:29:32.059 }' 00:29:32.059 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:32.059 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:32.059 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:32.059 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:32.059 11:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:32.059 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:32.059 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:32.059 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:32.316 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:32.316 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:32.316 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:32.316 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:32.316 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:32.316 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:29:32.316 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:32.573 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:32.573 "name": "pt4", 
00:29:32.573 "aliases": [ 00:29:32.573 "00000000-0000-0000-0000-000000000004" 00:29:32.573 ], 00:29:32.573 "product_name": "passthru", 00:29:32.573 "block_size": 512, 00:29:32.573 "num_blocks": 65536, 00:29:32.573 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:32.573 "assigned_rate_limits": { 00:29:32.573 "rw_ios_per_sec": 0, 00:29:32.573 "rw_mbytes_per_sec": 0, 00:29:32.573 "r_mbytes_per_sec": 0, 00:29:32.573 "w_mbytes_per_sec": 0 00:29:32.573 }, 00:29:32.573 "claimed": true, 00:29:32.573 "claim_type": "exclusive_write", 00:29:32.573 "zoned": false, 00:29:32.573 "supported_io_types": { 00:29:32.573 "read": true, 00:29:32.573 "write": true, 00:29:32.573 "unmap": true, 00:29:32.573 "write_zeroes": true, 00:29:32.573 "flush": true, 00:29:32.573 "reset": true, 00:29:32.573 "compare": false, 00:29:32.573 "compare_and_write": false, 00:29:32.573 "abort": true, 00:29:32.573 "nvme_admin": false, 00:29:32.573 "nvme_io": false 00:29:32.573 }, 00:29:32.573 "memory_domains": [ 00:29:32.573 { 00:29:32.573 "dma_device_id": "system", 00:29:32.573 "dma_device_type": 1 00:29:32.573 }, 00:29:32.573 { 00:29:32.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:32.573 "dma_device_type": 2 00:29:32.573 } 00:29:32.573 ], 00:29:32.573 "driver_specific": { 00:29:32.573 "passthru": { 00:29:32.573 "name": "pt4", 00:29:32.573 "base_bdev_name": "malloc4" 00:29:32.573 } 00:29:32.573 } 00:29:32.573 }' 00:29:32.573 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:32.573 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:32.573 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:32.573 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:32.830 11:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:29:33.087 [2024-06-10 11:53:05.060573] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:33.087 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' e597ad74-da60-4ef7-99a6-0d6e57a3d7d0 '!=' e597ad74-da60-4ef7-99a6-0d6e57a3d7d0 ']' 00:29:33.087 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:29:33.087 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:33.087 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:33.087 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:33.343 [2024-06-10 11:53:05.280377] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.343 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.600 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:33.600 "name": "raid_bdev1", 00:29:33.600 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:33.600 "strip_size_kb": 0, 00:29:33.600 "state": "online", 00:29:33.600 "raid_level": "raid1", 00:29:33.600 "superblock": true, 00:29:33.600 "num_base_bdevs": 4, 00:29:33.600 "num_base_bdevs_discovered": 3, 00:29:33.600 "num_base_bdevs_operational": 3, 00:29:33.600 "base_bdevs_list": [ 00:29:33.600 { 00:29:33.600 "name": null, 00:29:33.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.600 "is_configured": false, 00:29:33.600 "data_offset": 2048, 00:29:33.600 "data_size": 63488 00:29:33.600 }, 00:29:33.600 { 00:29:33.600 "name": "pt2", 00:29:33.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:33.600 "is_configured": true, 00:29:33.600 "data_offset": 2048, 00:29:33.600 "data_size": 63488 00:29:33.600 }, 00:29:33.600 { 00:29:33.600 "name": "pt3", 00:29:33.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:33.601 "is_configured": true, 00:29:33.601 "data_offset": 2048, 00:29:33.601 "data_size": 63488 00:29:33.601 }, 00:29:33.601 { 00:29:33.601 "name": "pt4", 00:29:33.601 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:33.601 "is_configured": true, 00:29:33.601 "data_offset": 2048, 00:29:33.601 "data_size": 63488 00:29:33.601 } 00:29:33.601 ] 00:29:33.601 }' 00:29:33.601 11:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:33.601 11:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.167 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:34.425 [2024-06-10 11:53:06.420635] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:34.425 [2024-06-10 11:53:06.420874] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:34.425 [2024-06-10 11:53:06.421043] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:34.425 [2024-06-10 11:53:06.421216] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:34.425 [2024-06-10 11:53:06.421304] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:29:34.425 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.425 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:29:34.683 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:29:34.683 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:29:34.683 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:29:34.683 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:34.683 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:34.941 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:34.941 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:34.941 11:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:35.198 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:35.198 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:35.198 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:35.455 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:35.456 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:35.456 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:29:35.456 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:35.456 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:35.713 [2024-06-10 11:53:07.552877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:35.713 [2024-06-10 11:53:07.553197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.713 [2024-06-10 11:53:07.553338] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:35.713 [2024-06-10 11:53:07.553464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.713 [2024-06-10 11:53:07.556253] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.713 [2024-06-10 11:53:07.556434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:35.713 [2024-06-10 11:53:07.556654] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:35.713 [2024-06-10 11:53:07.556820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:35.713 pt2 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.713 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.970 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:35.970 "name": "raid_bdev1", 00:29:35.970 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:35.970 "strip_size_kb": 0, 00:29:35.970 "state": "configuring", 00:29:35.970 "raid_level": "raid1", 00:29:35.970 "superblock": true, 00:29:35.970 "num_base_bdevs": 4, 00:29:35.970 "num_base_bdevs_discovered": 1, 00:29:35.970 "num_base_bdevs_operational": 3, 00:29:35.970 "base_bdevs_list": [ 00:29:35.970 { 00:29:35.970 "name": null, 00:29:35.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.970 "is_configured": false, 00:29:35.970 "data_offset": 2048, 00:29:35.970 "data_size": 63488 00:29:35.970 }, 00:29:35.970 { 00:29:35.970 "name": "pt2", 00:29:35.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:35.970 "is_configured": true, 00:29:35.970 "data_offset": 2048, 00:29:35.970 "data_size": 63488 00:29:35.970 }, 00:29:35.970 { 00:29:35.970 "name": null, 00:29:35.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:35.970 "is_configured": false, 00:29:35.970 "data_offset": 2048, 00:29:35.970 "data_size": 63488 00:29:35.970 }, 00:29:35.970 { 00:29:35.970 "name": null, 00:29:35.970 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:35.970 "is_configured": false, 00:29:35.970 "data_offset": 2048, 00:29:35.970 "data_size": 63488 00:29:35.970 } 00:29:35.970 ] 00:29:35.970 }' 00:29:35.970 11:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:35.970 11:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.556 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:29:36.556 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:36.556 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:36.814 [2024-06-10 11:53:08.825465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:36.814 [2024-06-10 11:53:08.825800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.814 [2024-06-10 11:53:08.825954] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:29:36.814 [2024-06-10 11:53:08.826076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.814 [2024-06-10 11:53:08.826749] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.814 [2024-06-10 11:53:08.826913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:36.814 [2024-06-10 11:53:08.827155] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:36.814 [2024-06-10 11:53:08.827279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:36.814 pt3 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.814 11:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.377 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:37.377 "name": "raid_bdev1", 00:29:37.377 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:37.377 "strip_size_kb": 0, 00:29:37.377 "state": "configuring", 00:29:37.377 "raid_level": "raid1", 00:29:37.377 "superblock": true, 00:29:37.377 "num_base_bdevs": 4, 00:29:37.377 "num_base_bdevs_discovered": 2, 00:29:37.377 "num_base_bdevs_operational": 3, 00:29:37.377 "base_bdevs_list": [ 00:29:37.377 { 00:29:37.377 "name": null, 00:29:37.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.377 "is_configured": false, 00:29:37.377 "data_offset": 2048, 00:29:37.377 "data_size": 63488 00:29:37.377 }, 00:29:37.377 { 00:29:37.377 "name": "pt2", 00:29:37.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:37.377 "is_configured": true, 00:29:37.377 "data_offset": 2048, 00:29:37.377 "data_size": 63488 00:29:37.377 }, 00:29:37.377 { 00:29:37.377 "name": "pt3", 00:29:37.377 
"uuid": "00000000-0000-0000-0000-000000000003", 00:29:37.377 "is_configured": true, 00:29:37.377 "data_offset": 2048, 00:29:37.377 "data_size": 63488 00:29:37.377 }, 00:29:37.377 { 00:29:37.377 "name": null, 00:29:37.377 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:37.377 "is_configured": false, 00:29:37.377 "data_offset": 2048, 00:29:37.377 "data_size": 63488 00:29:37.377 } 00:29:37.377 ] 00:29:37.377 }' 00:29:37.377 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:37.377 11:53:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.941 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:29:37.941 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:37.941 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:29:37.941 11:53:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:38.199 [2024-06-10 11:53:10.031269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:38.199 [2024-06-10 11:53:10.031590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.199 [2024-06-10 11:53:10.031755] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:29:38.199 [2024-06-10 11:53:10.031892] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.199 [2024-06-10 11:53:10.032465] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.199 [2024-06-10 11:53:10.032642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:38.199 [2024-06-10 11:53:10.032958] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:38.199 [2024-06-10 11:53:10.033107] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:38.199 [2024-06-10 11:53:10.033382] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:29:38.199 [2024-06-10 11:53:10.033509] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:38.199 [2024-06-10 11:53:10.033666] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:29:38.199 [2024-06-10 11:53:10.034111] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:29:38.199 [2024-06-10 11:53:10.034242] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:29:38.199 [2024-06-10 11:53:10.034516] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:38.199 pt4 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=3 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.199 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.457 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:38.457 "name": "raid_bdev1", 00:29:38.457 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:38.457 "strip_size_kb": 0, 00:29:38.457 "state": "online", 00:29:38.457 "raid_level": "raid1", 00:29:38.457 "superblock": true, 00:29:38.457 "num_base_bdevs": 4, 00:29:38.457 "num_base_bdevs_discovered": 3, 00:29:38.457 "num_base_bdevs_operational": 3, 00:29:38.457 "base_bdevs_list": [ 00:29:38.457 { 00:29:38.457 "name": null, 00:29:38.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.457 "is_configured": false, 00:29:38.457 "data_offset": 2048, 00:29:38.457 "data_size": 63488 00:29:38.457 }, 00:29:38.457 { 00:29:38.457 "name": "pt2", 00:29:38.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:38.457 "is_configured": true, 00:29:38.457 "data_offset": 2048, 00:29:38.457 "data_size": 63488 00:29:38.457 }, 00:29:38.457 { 00:29:38.457 "name": "pt3", 00:29:38.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:38.457 "is_configured": true, 00:29:38.457 "data_offset": 2048, 00:29:38.457 "data_size": 63488 00:29:38.457 }, 00:29:38.457 { 00:29:38.457 "name": "pt4", 00:29:38.457 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:38.457 "is_configured": true, 00:29:38.457 "data_offset": 2048, 00:29:38.457 "data_size": 63488 00:29:38.457 } 00:29:38.457 ] 00:29:38.457 }' 00:29:38.457 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:38.457 11:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.023 11:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:39.280 [2024-06-10 11:53:11.216667] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:39.280 [2024-06-10 11:53:11.216951] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:39.280 [2024-06-10 11:53:11.217146] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:39.280 [2024-06-10 11:53:11.217317] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:39.280 [2024-06-10 11:53:11.217438] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:29:39.280 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.280 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:29:39.538 11:53:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@526 -- # raid_bdev= 00:29:39.538 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:29:39.538 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:29:39.538 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:29:39.538 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:39.796 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:40.055 [2024-06-10 11:53:11.896766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:40.055 [2024-06-10 11:53:11.897388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.055 [2024-06-10 11:53:11.897681] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:29:40.055 [2024-06-10 11:53:11.897955] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.055 [2024-06-10 11:53:11.900843] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.055 [2024-06-10 11:53:11.901146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:40.055 [2024-06-10 11:53:11.901497] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:40.055 [2024-06-10 11:53:11.901672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:40.055 [2024-06-10 11:53:11.901894] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:40.055 [2024-06-10 11:53:11.902008] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:40.055 [2024-06-10 11:53:11.902061] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:29:40.055 [2024-06-10 11:53:11.902324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:40.055 pt1 00:29:40.055 [2024-06-10 11:53:11.902603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.055 11:53:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.055 11:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.313 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:40.313 "name": "raid_bdev1", 00:29:40.313 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:40.313 "strip_size_kb": 0, 00:29:40.313 "state": "configuring", 00:29:40.313 "raid_level": "raid1", 00:29:40.313 "superblock": true, 00:29:40.313 "num_base_bdevs": 4, 00:29:40.313 "num_base_bdevs_discovered": 2, 00:29:40.313 "num_base_bdevs_operational": 3, 00:29:40.313 "base_bdevs_list": [ 00:29:40.313 { 00:29:40.313 "name": null, 00:29:40.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.313 "is_configured": false, 00:29:40.313 "data_offset": 2048, 00:29:40.313 "data_size": 63488 00:29:40.313 }, 00:29:40.313 { 00:29:40.313 "name": "pt2", 00:29:40.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:40.313 "is_configured": true, 00:29:40.313 "data_offset": 2048, 00:29:40.313 "data_size": 63488 00:29:40.313 }, 00:29:40.313 { 00:29:40.313 "name": "pt3", 00:29:40.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:40.313 "is_configured": true, 00:29:40.313 "data_offset": 2048, 00:29:40.313 "data_size": 63488 00:29:40.313 }, 00:29:40.313 { 00:29:40.313 "name": null, 00:29:40.313 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:40.313 "is_configured": false, 00:29:40.313 "data_offset": 2048, 00:29:40.313 "data_size": 63488 00:29:40.313 } 00:29:40.313 ] 00:29:40.313 }' 00:29:40.313 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:40.313 11:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.891 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:29:40.891 11:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:41.179 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:29:41.179 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:41.437 [2024-06-10 11:53:13.427297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:41.437 [2024-06-10 11:53:13.427604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.437 [2024-06-10 11:53:13.427680] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:29:41.437 [2024-06-10 11:53:13.427917] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.437 [2024-06-10 11:53:13.428459] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.437 [2024-06-10 11:53:13.428625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:41.437 [2024-06-10 11:53:13.428878] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:41.437 [2024-06-10 11:53:13.429003] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:41.437 [2024-06-10 11:53:13.429183] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:29:41.437 [2024-06-10 11:53:13.429344] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:41.437 [2024-06-10 11:53:13.429497] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:29:41.437 [2024-06-10 11:53:13.429964] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:29:41.437 [2024-06-10 11:53:13.430086] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:29:41.437 [2024-06-10 11:53:13.430329] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:41.437 pt4 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.437 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.696 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:41.696 "name": "raid_bdev1", 00:29:41.696 "uuid": "e597ad74-da60-4ef7-99a6-0d6e57a3d7d0", 00:29:41.696 "strip_size_kb": 0, 00:29:41.696 "state": "online", 00:29:41.696 "raid_level": "raid1", 00:29:41.696 "superblock": true, 00:29:41.696 "num_base_bdevs": 4, 00:29:41.696 "num_base_bdevs_discovered": 3, 00:29:41.696 "num_base_bdevs_operational": 3, 00:29:41.696 "base_bdevs_list": [ 00:29:41.696 { 00:29:41.696 "name": null, 00:29:41.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.696 "is_configured": false, 00:29:41.696 "data_offset": 2048, 00:29:41.696 "data_size": 63488 00:29:41.696 }, 00:29:41.696 { 00:29:41.696 "name": "pt2", 00:29:41.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:41.696 "is_configured": true, 00:29:41.696 "data_offset": 2048, 00:29:41.696 "data_size": 63488 00:29:41.696 }, 00:29:41.696 { 00:29:41.696 "name": "pt3", 00:29:41.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:41.696 "is_configured": true, 00:29:41.696 "data_offset": 2048, 00:29:41.696 "data_size": 63488 00:29:41.696 }, 00:29:41.696 { 00:29:41.696 "name": "pt4", 00:29:41.696 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:41.696 
"is_configured": true, 00:29:41.696 "data_offset": 2048, 00:29:41.696 "data_size": 63488 00:29:41.696 } 00:29:41.696 ] 00:29:41.696 }' 00:29:41.696 11:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:41.696 11:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.629 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:42.629 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:42.629 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:29:42.629 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:42.629 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:29:42.888 [2024-06-10 11:53:14.827891] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' e597ad74-da60-4ef7-99a6-0d6e57a3d7d0 '!=' e597ad74-da60-4ef7-99a6-0d6e57a3d7d0 ']' 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 144484 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 144484 ']' 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 144484 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 144484 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 144484' 00:29:42.888 killing process with pid 144484 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 144484 00:29:42.888 [2024-06-10 11:53:14.878920] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:42.888 11:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 144484 00:29:42.888 [2024-06-10 11:53:14.879148] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:42.888 [2024-06-10 11:53:14.879334] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:42.888 [2024-06-10 11:53:14.879429] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:29:43.454 [2024-06-10 11:53:15.349552] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:44.826 ************************************ 00:29:44.826 END TEST raid_superblock_test 00:29:44.826 ************************************ 00:29:44.826 11:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:29:44.826 00:29:44.826 real 0m29.920s 00:29:44.826 user 0m54.194s 00:29:44.826 sys 0m4.234s 
00:29:44.826 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:44.826 11:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.084 11:53:16 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:29:45.084 11:53:16 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:29:45.084 11:53:16 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:45.084 11:53:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:45.084 ************************************ 00:29:45.084 START TEST raid_read_error_test 00:29:45.084 ************************************ 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 4 read 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:29:45.084 
11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.PztaupWknm 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=145371 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 145371 /var/tmp/spdk-raid.sock 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 145371 ']' 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:45.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:45.084 11:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.084 [2024-06-10 11:53:17.019336] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:29:45.084 [2024-06-10 11:53:17.019717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145371 ] 00:29:45.343 [2024-06-10 11:53:17.196769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.602 [2024-06-10 11:53:17.502609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.860 [2024-06-10 11:53:17.776127] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:46.118 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:46.118 11:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:29:46.118 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:46.118 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:46.376 BaseBdev1_malloc 00:29:46.376 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:29:46.634 true 00:29:46.634 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:46.892 [2024-06-10 11:53:18.847705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:46.892 [2024-06-10 11:53:18.848027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:46.892 [2024-06-10 11:53:18.848209] vbdev_passthru.c: 680:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000006f80 00:29:46.892 [2024-06-10 11:53:18.848311] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:46.892 [2024-06-10 11:53:18.851074] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:46.892 [2024-06-10 11:53:18.851258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:46.892 BaseBdev1 00:29:46.892 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:46.892 11:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:47.149 BaseBdev2_malloc 00:29:47.149 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:29:47.407 true 00:29:47.407 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:47.665 [2024-06-10 11:53:19.649949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:47.665 [2024-06-10 11:53:19.650298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:47.665 [2024-06-10 11:53:19.650469] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:47.665 [2024-06-10 11:53:19.650577] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.665 [2024-06-10 11:53:19.653406] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.665 [2024-06-10 11:53:19.653585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:47.665 BaseBdev2 00:29:47.665 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:47.665 11:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:48.233 BaseBdev3_malloc 00:29:48.233 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:29:48.548 true 00:29:48.548 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:48.806 [2024-06-10 11:53:20.625722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:48.806 [2024-06-10 11:53:20.626034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:48.806 [2024-06-10 11:53:20.626179] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:48.806 [2024-06-10 11:53:20.626288] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:48.806 [2024-06-10 11:53:20.629033] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:48.806 [2024-06-10 11:53:20.629244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:48.806 BaseBdev3 00:29:48.806 11:53:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:48.806 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:49.064 BaseBdev4_malloc 00:29:49.064 11:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:29:49.322 true 00:29:49.322 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:29:49.579 [2024-06-10 11:53:21.455630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:29:49.579 [2024-06-10 11:53:21.455957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:49.579 [2024-06-10 11:53:21.456101] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:49.579 [2024-06-10 11:53:21.456234] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:49.579 [2024-06-10 11:53:21.459078] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:49.579 [2024-06-10 11:53:21.459277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:49.579 BaseBdev4 00:29:49.579 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:29:49.838 [2024-06-10 11:53:21.679791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:49.838 [2024-06-10 11:53:21.682339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:49.838 [2024-06-10 11:53:21.682572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:49.838 [2024-06-10 11:53:21.682807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:49.838 [2024-06-10 11:53:21.683183] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:29:49.838 [2024-06-10 11:53:21.683309] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:49.838 [2024-06-10 11:53:21.683551] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:49.838 [2024-06-10 11:53:21.684056] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:29:49.838 [2024-06-10 11:53:21.684176] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:29:49.838 [2024-06-10 11:53:21.684498] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.838 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.097 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:50.097 "name": "raid_bdev1", 00:29:50.097 "uuid": "479d7341-6b0c-454f-a33e-d972428e65da", 00:29:50.097 "strip_size_kb": 0, 00:29:50.097 "state": "online", 00:29:50.097 "raid_level": "raid1", 00:29:50.097 "superblock": true, 00:29:50.097 "num_base_bdevs": 4, 00:29:50.097 "num_base_bdevs_discovered": 4, 00:29:50.097 "num_base_bdevs_operational": 4, 00:29:50.097 "base_bdevs_list": [ 00:29:50.097 { 00:29:50.097 "name": "BaseBdev1", 00:29:50.097 "uuid": "edcd31b5-7074-5f09-8aa7-fbb46f65f9ea", 00:29:50.097 "is_configured": true, 00:29:50.097 "data_offset": 2048, 00:29:50.097 "data_size": 63488 00:29:50.097 }, 00:29:50.097 { 00:29:50.097 "name": "BaseBdev2", 00:29:50.097 "uuid": "93eab231-fd9e-5023-abc4-76f22379f640", 00:29:50.097 "is_configured": true, 00:29:50.097 "data_offset": 2048, 00:29:50.097 "data_size": 63488 00:29:50.097 }, 00:29:50.097 { 00:29:50.097 "name": "BaseBdev3", 00:29:50.097 "uuid": "43008c4e-1fff-5dfe-8a89-8094f4d745a2", 00:29:50.097 "is_configured": true, 00:29:50.097 "data_offset": 2048, 00:29:50.097 "data_size": 63488 00:29:50.097 }, 00:29:50.097 { 00:29:50.097 "name": "BaseBdev4", 00:29:50.097 "uuid": "92b95cb5-a4cf-5b37-a686-811bedf9d7d8", 00:29:50.097 "is_configured": true, 00:29:50.097 "data_offset": 2048, 00:29:50.097 "data_size": 63488 00:29:50.097 } 00:29:50.097 ] 00:29:50.097 }' 00:29:50.097 11:53:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:50.097 11:53:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.663 11:53:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:29:50.663 11:53:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:50.921 [2024-06-10 11:53:22.722367] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:51.855 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:29:52.113 11:53:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:52.113 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:52.114 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:52.114 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:52.114 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:52.114 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:52.114 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.114 11:53:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.372 11:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:52.372 "name": "raid_bdev1", 00:29:52.372 "uuid": "479d7341-6b0c-454f-a33e-d972428e65da", 00:29:52.372 "strip_size_kb": 0, 00:29:52.372 "state": "online", 00:29:52.372 "raid_level": "raid1", 00:29:52.372 "superblock": true, 00:29:52.372 "num_base_bdevs": 4, 00:29:52.372 "num_base_bdevs_discovered": 4, 00:29:52.372 "num_base_bdevs_operational": 4, 00:29:52.372 "base_bdevs_list": [ 00:29:52.372 { 00:29:52.372 "name": "BaseBdev1", 00:29:52.372 "uuid": "edcd31b5-7074-5f09-8aa7-fbb46f65f9ea", 00:29:52.372 "is_configured": true, 00:29:52.372 "data_offset": 2048, 00:29:52.372 "data_size": 63488 00:29:52.372 }, 00:29:52.372 { 00:29:52.372 "name": "BaseBdev2", 00:29:52.372 "uuid": "93eab231-fd9e-5023-abc4-76f22379f640", 00:29:52.372 "is_configured": true, 00:29:52.372 "data_offset": 2048, 00:29:52.372 "data_size": 63488 00:29:52.372 }, 00:29:52.372 { 00:29:52.372 "name": "BaseBdev3", 00:29:52.372 "uuid": "43008c4e-1fff-5dfe-8a89-8094f4d745a2", 00:29:52.372 "is_configured": true, 00:29:52.372 "data_offset": 2048, 00:29:52.372 "data_size": 63488 00:29:52.372 }, 00:29:52.372 { 00:29:52.372 "name": "BaseBdev4", 00:29:52.372 "uuid": "92b95cb5-a4cf-5b37-a686-811bedf9d7d8", 00:29:52.372 "is_configured": true, 00:29:52.373 "data_offset": 2048, 00:29:52.373 "data_size": 63488 00:29:52.373 } 00:29:52.373 ] 00:29:52.373 }' 00:29:52.373 11:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:52.373 11:53:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.939 11:53:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:53.197 [2024-06-10 11:53:25.196169] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:53.197 [2024-06-10 11:53:25.196420] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:53.197 [2024-06-10 11:53:25.199594] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:29:53.197 [2024-06-10 11:53:25.199774] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.197 [2024-06-10 11:53:25.199943] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:53.197 [2024-06-10 11:53:25.200102] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:29:53.197 0 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 145371 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 145371 ']' 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 145371 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 145371 00:29:53.197 killing process with pid 145371 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 145371' 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 145371 00:29:53.197 [2024-06-10 11:53:25.243895] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:53.197 11:53:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 145371 00:29:53.762 [2024-06-10 11:53:25.676850] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.PztaupWknm 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:29:55.664 ************************************ 00:29:55.664 END TEST raid_read_error_test 00:29:55.664 ************************************ 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:55.664 00:29:55.664 real 0m10.510s 00:29:55.664 user 0m15.949s 00:29:55.664 sys 0m1.237s 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:55.664 11:53:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.664 11:53:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:29:55.664 11:53:27 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:29:55.664 11:53:27 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:55.664 11:53:27 bdev_raid -- common/autotest_common.sh@10 -- # set 
+x 00:29:55.664 ************************************ 00:29:55.664 START TEST raid_write_error_test 00:29:55.664 ************************************ 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 4 write 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.1VlZgSwQQT 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=145607 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 
00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 145607 /var/tmp/spdk-raid.sock 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 145607 ']' 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:55.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:55.664 11:53:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.664 [2024-06-10 11:53:27.594274] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:29:55.664 [2024-06-10 11:53:27.594721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145607 ] 00:29:55.923 [2024-06-10 11:53:27.796513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.182 [2024-06-10 11:53:28.045355] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.440 [2024-06-10 11:53:28.311205] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:56.699 11:53:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:56.699 11:53:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:29:56.699 11:53:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:56.699 11:53:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:56.957 BaseBdev1_malloc 00:29:56.957 11:53:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:29:57.523 true 00:29:57.523 11:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:57.523 [2024-06-10 11:53:29.522290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:57.523 [2024-06-10 11:53:29.522637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:57.523 [2024-06-10 11:53:29.522748] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:57.523 [2024-06-10 11:53:29.522877] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:57.523 [2024-06-10 11:53:29.525590] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:57.523 [2024-06-10 11:53:29.525782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:57.523 BaseBdev1 00:29:57.523 11:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in 
"${base_bdevs[@]}" 00:29:57.523 11:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:58.087 BaseBdev2_malloc 00:29:58.087 11:53:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:29:58.087 true 00:29:58.087 11:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:58.345 [2024-06-10 11:53:30.391083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:58.345 [2024-06-10 11:53:30.391403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:58.345 [2024-06-10 11:53:30.391509] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:58.345 [2024-06-10 11:53:30.391776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:58.345 [2024-06-10 11:53:30.394461] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:58.345 [2024-06-10 11:53:30.394649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:58.345 BaseBdev2 00:29:58.603 11:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:58.603 11:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:58.861 BaseBdev3_malloc 00:29:58.861 11:53:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:29:59.119 true 00:29:59.119 11:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:59.377 [2024-06-10 11:53:31.242118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:59.377 [2024-06-10 11:53:31.242427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:59.377 [2024-06-10 11:53:31.242504] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:59.377 [2024-06-10 11:53:31.242631] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:59.377 [2024-06-10 11:53:31.245305] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:59.377 [2024-06-10 11:53:31.245490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:59.377 BaseBdev3 00:29:59.377 11:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:59.377 11:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:59.635 BaseBdev4_malloc 00:29:59.635 11:53:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:29:59.894 true 00:29:59.894 11:53:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:30:00.153 [2024-06-10 11:53:32.077071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:30:00.153 [2024-06-10 11:53:32.077368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:00.153 [2024-06-10 11:53:32.077511] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:30:00.153 [2024-06-10 11:53:32.077620] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:00.153 [2024-06-10 11:53:32.080359] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:00.153 [2024-06-10 11:53:32.080541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:00.153 BaseBdev4 00:30:00.153 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:30:00.411 [2024-06-10 11:53:32.349302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:00.411 [2024-06-10 11:53:32.351756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:00.411 [2024-06-10 11:53:32.351990] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:00.411 [2024-06-10 11:53:32.352198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:00.411 [2024-06-10 11:53:32.352610] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:30:00.411 [2024-06-10 11:53:32.352734] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:00.411 [2024-06-10 11:53:32.352901] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:00.411 [2024-06-10 11:53:32.353475] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:30:00.411 [2024-06-10 11:53:32.353599] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:30:00.411 [2024-06-10 11:53:32.353912] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.411 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.669 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:00.669 "name": "raid_bdev1", 00:30:00.669 "uuid": "bdf920ce-bc10-4a6a-808a-12a8d15050f9", 00:30:00.669 "strip_size_kb": 0, 00:30:00.669 "state": "online", 00:30:00.669 "raid_level": "raid1", 00:30:00.669 "superblock": true, 00:30:00.669 "num_base_bdevs": 4, 00:30:00.669 "num_base_bdevs_discovered": 4, 00:30:00.669 "num_base_bdevs_operational": 4, 00:30:00.669 "base_bdevs_list": [ 00:30:00.669 { 00:30:00.669 "name": "BaseBdev1", 00:30:00.669 "uuid": "309ab0fb-9e75-5952-894c-808b615eac6e", 00:30:00.669 "is_configured": true, 00:30:00.669 "data_offset": 2048, 00:30:00.669 "data_size": 63488 00:30:00.669 }, 00:30:00.669 { 00:30:00.669 "name": "BaseBdev2", 00:30:00.669 "uuid": "2af433b9-1799-5eec-b71b-d289516477e8", 00:30:00.669 "is_configured": true, 00:30:00.669 "data_offset": 2048, 00:30:00.669 "data_size": 63488 00:30:00.669 }, 00:30:00.669 { 00:30:00.669 "name": "BaseBdev3", 00:30:00.669 "uuid": "b9609750-2949-5bfc-a33e-cde5bbc852c2", 00:30:00.669 "is_configured": true, 00:30:00.669 "data_offset": 2048, 00:30:00.669 "data_size": 63488 00:30:00.669 }, 00:30:00.669 { 00:30:00.669 "name": "BaseBdev4", 00:30:00.669 "uuid": "7402fc12-ff0b-5cda-9dde-24d1f65cd9ee", 00:30:00.669 "is_configured": true, 00:30:00.669 "data_offset": 2048, 00:30:00.669 "data_size": 63488 00:30:00.669 } 00:30:00.669 ] 00:30:00.669 }' 00:30:00.669 11:53:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:00.669 11:53:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.234 11:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:01.234 11:53:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:30:01.234 [2024-06-10 11:53:33.235797] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:02.169 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:30:02.428 [2024-06-10 11:53:34.443799] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:30:02.428 [2024-06-10 11:53:34.444086] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:02.428 [2024-06-10 11:53:34.444426] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.428 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.995 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:02.995 "name": "raid_bdev1", 00:30:02.995 "uuid": "bdf920ce-bc10-4a6a-808a-12a8d15050f9", 00:30:02.995 "strip_size_kb": 0, 00:30:02.995 "state": "online", 00:30:02.995 "raid_level": "raid1", 00:30:02.995 "superblock": true, 00:30:02.995 "num_base_bdevs": 4, 00:30:02.995 "num_base_bdevs_discovered": 3, 00:30:02.995 "num_base_bdevs_operational": 3, 00:30:02.995 "base_bdevs_list": [ 00:30:02.995 { 00:30:02.995 "name": null, 00:30:02.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.995 "is_configured": false, 00:30:02.995 "data_offset": 2048, 00:30:02.995 "data_size": 63488 00:30:02.995 }, 00:30:02.995 { 00:30:02.995 "name": "BaseBdev2", 00:30:02.995 "uuid": "2af433b9-1799-5eec-b71b-d289516477e8", 00:30:02.995 "is_configured": true, 00:30:02.995 "data_offset": 2048, 00:30:02.995 "data_size": 63488 00:30:02.995 }, 00:30:02.995 { 00:30:02.995 "name": "BaseBdev3", 00:30:02.995 "uuid": "b9609750-2949-5bfc-a33e-cde5bbc852c2", 00:30:02.995 "is_configured": true, 00:30:02.995 "data_offset": 2048, 00:30:02.995 "data_size": 63488 00:30:02.995 }, 00:30:02.995 { 00:30:02.995 "name": "BaseBdev4", 00:30:02.995 "uuid": "7402fc12-ff0b-5cda-9dde-24d1f65cd9ee", 00:30:02.995 "is_configured": true, 00:30:02.995 "data_offset": 2048, 00:30:02.995 "data_size": 63488 00:30:02.995 } 00:30:02.995 ] 00:30:02.995 }' 00:30:02.995 11:53:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:02.995 11:53:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.561 11:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:03.820 [2024-06-10 11:53:35.754526] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:03.820 [2024-06-10 11:53:35.754825] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:03.820 [2024-06-10 11:53:35.757499] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:03.820 [2024-06-10 11:53:35.757658] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:03.820 [2024-06-10 11:53:35.757793] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:03.820 [2024-06-10 11:53:35.757944] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:30:03.820 0 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 145607 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 145607 ']' 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 145607 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 145607 00:30:03.820 killing process with pid 145607 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 145607' 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 145607 00:30:03.820 [2024-06-10 11:53:35.816664] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:03.820 11:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 145607 00:30:04.386 [2024-06-10 11:53:36.206900] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.1VlZgSwQQT 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:30:05.801 ************************************ 00:30:05.801 END TEST raid_write_error_test 00:30:05.801 ************************************ 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:30:05.801 00:30:05.801 real 0m10.313s 00:30:05.801 user 0m15.670s 00:30:05.801 sys 0m1.338s 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:05.801 11:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.059 11:53:37 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:30:06.059 11:53:37 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:30:06.059 11:53:37 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:30:06.059 11:53:37 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:30:06.059 11:53:37 bdev_raid -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:30:06.059 11:53:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:06.059 ************************************ 00:30:06.059 START TEST raid_rebuild_test 00:30:06.059 ************************************ 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 false false true 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=145828 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 145828 /var/tmp/spdk-raid.sock 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@830 -- # '[' -z 145828 ']' 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:06.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:06.059 11:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.059 [2024-06-10 11:53:37.991829] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:30:06.059 [2024-06-10 11:53:37.992332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145828 ] 00:30:06.059 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:06.059 Zero copy mechanism will not be used. 00:30:06.318 [2024-06-10 11:53:38.176104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.576 [2024-06-10 11:53:38.398552] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.834 [2024-06-10 11:53:38.639634] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:07.092 11:53:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:07.092 11:53:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@863 -- # return 0 00:30:07.092 11:53:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:07.092 11:53:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:07.351 BaseBdev1_malloc 00:30:07.351 11:53:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:07.609 [2024-06-10 11:53:39.413745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:07.609 [2024-06-10 11:53:39.414082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:07.609 [2024-06-10 11:53:39.414173] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:07.609 [2024-06-10 11:53:39.414289] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:07.609 [2024-06-10 11:53:39.416871] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:07.609 [2024-06-10 11:53:39.417066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:07.609 BaseBdev1 00:30:07.609 11:53:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:07.609 11:53:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:07.867 BaseBdev2_malloc 00:30:07.867 11:53:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:08.125 [2024-06-10 11:53:39.970784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:08.125 [2024-06-10 11:53:39.971107] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:08.125 [2024-06-10 11:53:39.971202] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:08.125 [2024-06-10 11:53:39.971360] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:08.125 [2024-06-10 11:53:39.973918] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:08.125 [2024-06-10 11:53:39.974108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:08.125 BaseBdev2 00:30:08.125 11:53:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:08.435 spare_malloc 00:30:08.435 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:08.435 spare_delay 00:30:08.435 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:08.692 [2024-06-10 11:53:40.721530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:08.692 [2024-06-10 11:53:40.721834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:08.692 [2024-06-10 11:53:40.721957] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:08.692 [2024-06-10 11:53:40.722059] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:08.692 [2024-06-10 11:53:40.724498] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:08.692 [2024-06-10 11:53:40.724668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:08.692 spare 00:30:08.692 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:08.950 [2024-06-10 11:53:40.973605] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:08.950 [2024-06-10 11:53:40.975750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:08.950 [2024-06-10 11:53:40.975957] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:30:08.950 [2024-06-10 11:53:40.976078] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:08.950 [2024-06-10 11:53:40.976267] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:08.950 [2024-06-10 11:53:40.976699] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:30:08.950 [2024-06-10 11:53:40.976810] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:30:08.950 [2024-06-10 11:53:40.977097] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:08.950 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:08.951 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.951 11:53:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.208 11:53:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.208 "name": "raid_bdev1", 00:30:09.208 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:09.208 "strip_size_kb": 0, 00:30:09.208 "state": "online", 00:30:09.208 "raid_level": "raid1", 00:30:09.208 "superblock": false, 00:30:09.208 "num_base_bdevs": 2, 00:30:09.208 "num_base_bdevs_discovered": 2, 00:30:09.208 "num_base_bdevs_operational": 2, 00:30:09.208 "base_bdevs_list": [ 00:30:09.208 { 00:30:09.208 "name": "BaseBdev1", 00:30:09.209 "uuid": "c96b6a4b-454b-55ad-b703-e10ac064fc44", 00:30:09.209 "is_configured": true, 00:30:09.209 "data_offset": 0, 00:30:09.209 "data_size": 65536 00:30:09.209 }, 00:30:09.209 { 00:30:09.209 "name": "BaseBdev2", 00:30:09.209 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:09.209 "is_configured": true, 00:30:09.209 "data_offset": 0, 00:30:09.209 "data_size": 65536 00:30:09.209 } 00:30:09.209 ] 00:30:09.209 }' 00:30:09.209 11:53:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.209 11:53:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.141 11:53:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:10.141 11:53:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:10.399 [2024-06-10 11:53:42.230099] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:10.399 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:30:10.399 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:10.399 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
raid_bdev1 /dev/nbd0 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:10.657 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:10.916 [2024-06-10 11:53:42.794069] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:10.916 /dev/nbd0 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:10.916 1+0 records in 00:30:10.916 1+0 records out 00:30:10.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624446 s, 6.6 MB/s 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:30:10.916 11:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:30:16.229 65536+0 records in 00:30:16.229 65536+0 records out 00:30:16.229 33554432 bytes (34 MB, 32 MiB) copied, 4.76655 s, 7.0 MB/s 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:16.229 [2024-06-10 11:53:47.822094] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:16.229 11:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:16.229 [2024-06-10 11:53:48.029851] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.229 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:30:16.488 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:16.488 "name": "raid_bdev1", 00:30:16.488 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:16.488 "strip_size_kb": 0, 00:30:16.488 "state": "online", 00:30:16.488 "raid_level": "raid1", 00:30:16.488 "superblock": false, 00:30:16.488 "num_base_bdevs": 2, 00:30:16.488 "num_base_bdevs_discovered": 1, 00:30:16.488 "num_base_bdevs_operational": 1, 00:30:16.488 "base_bdevs_list": [ 00:30:16.488 { 00:30:16.488 "name": null, 00:30:16.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:16.488 "is_configured": false, 00:30:16.488 "data_offset": 0, 00:30:16.488 "data_size": 65536 00:30:16.488 }, 00:30:16.488 { 00:30:16.488 "name": "BaseBdev2", 00:30:16.488 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:16.488 "is_configured": true, 00:30:16.488 "data_offset": 0, 00:30:16.488 "data_size": 65536 00:30:16.488 } 00:30:16.488 ] 00:30:16.488 }' 00:30:16.488 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:16.488 11:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.055 11:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:17.055 [2024-06-10 11:53:49.091340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:17.055 [2024-06-10 11:53:49.110742] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09960 00:30:17.055 [2024-06-10 11:53:49.112915] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:17.315 11:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:18.249 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:18.249 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:18.249 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:18.249 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:18.249 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:18.249 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.249 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.508 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:18.508 "name": "raid_bdev1", 00:30:18.508 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:18.508 "strip_size_kb": 0, 00:30:18.508 "state": "online", 00:30:18.508 "raid_level": "raid1", 00:30:18.508 "superblock": false, 00:30:18.508 "num_base_bdevs": 2, 00:30:18.508 "num_base_bdevs_discovered": 2, 00:30:18.508 "num_base_bdevs_operational": 2, 00:30:18.508 "process": { 00:30:18.508 "type": "rebuild", 00:30:18.508 "target": "spare", 00:30:18.508 "progress": { 00:30:18.508 "blocks": 24576, 00:30:18.508 "percent": 37 00:30:18.508 } 00:30:18.508 }, 00:30:18.508 "base_bdevs_list": [ 00:30:18.508 { 00:30:18.508 "name": "spare", 00:30:18.508 "uuid": "45d435b6-ed31-51b6-87ce-429fe6349cfb", 00:30:18.508 "is_configured": true, 00:30:18.508 "data_offset": 0, 00:30:18.508 
"data_size": 65536 00:30:18.508 }, 00:30:18.508 { 00:30:18.508 "name": "BaseBdev2", 00:30:18.508 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:18.508 "is_configured": true, 00:30:18.508 "data_offset": 0, 00:30:18.508 "data_size": 65536 00:30:18.508 } 00:30:18.508 ] 00:30:18.508 }' 00:30:18.508 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:18.508 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:18.508 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:18.508 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:18.508 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:18.765 [2024-06-10 11:53:50.659017] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:18.765 [2024-06-10 11:53:50.723016] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:18.765 [2024-06-10 11:53:50.723253] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:18.765 [2024-06-10 11:53:50.723303] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:18.765 [2024-06-10 11:53:50.723376] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.766 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.031 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:19.031 "name": "raid_bdev1", 00:30:19.031 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:19.031 "strip_size_kb": 0, 00:30:19.031 "state": "online", 00:30:19.031 "raid_level": "raid1", 00:30:19.031 "superblock": false, 00:30:19.031 "num_base_bdevs": 2, 00:30:19.032 "num_base_bdevs_discovered": 1, 00:30:19.032 "num_base_bdevs_operational": 1, 00:30:19.032 "base_bdevs_list": [ 00:30:19.032 { 00:30:19.032 "name": null, 00:30:19.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.032 "is_configured": false, 
00:30:19.032 "data_offset": 0, 00:30:19.032 "data_size": 65536 00:30:19.032 }, 00:30:19.032 { 00:30:19.032 "name": "BaseBdev2", 00:30:19.032 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:19.032 "is_configured": true, 00:30:19.032 "data_offset": 0, 00:30:19.032 "data_size": 65536 00:30:19.032 } 00:30:19.032 ] 00:30:19.032 }' 00:30:19.032 11:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:19.032 11:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.598 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:19.598 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:19.598 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:19.598 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:19.598 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:19.598 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.598 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.857 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:19.857 "name": "raid_bdev1", 00:30:19.857 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:19.857 "strip_size_kb": 0, 00:30:19.857 "state": "online", 00:30:19.857 "raid_level": "raid1", 00:30:19.857 "superblock": false, 00:30:19.857 "num_base_bdevs": 2, 00:30:19.857 "num_base_bdevs_discovered": 1, 00:30:19.857 "num_base_bdevs_operational": 1, 00:30:19.857 "base_bdevs_list": [ 00:30:19.857 { 00:30:19.857 "name": null, 00:30:19.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.857 "is_configured": false, 00:30:19.857 "data_offset": 0, 00:30:19.857 "data_size": 65536 00:30:19.857 }, 00:30:19.857 { 00:30:19.857 "name": "BaseBdev2", 00:30:19.857 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:19.857 "is_configured": true, 00:30:19.857 "data_offset": 0, 00:30:19.857 "data_size": 65536 00:30:19.857 } 00:30:19.857 ] 00:30:19.857 }' 00:30:19.857 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:19.857 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:19.857 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:19.857 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:19.857 11:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:20.116 [2024-06-10 11:53:52.138812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:20.117 [2024-06-10 11:53:52.156596] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:30:20.117 [2024-06-10 11:53:52.158889] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:20.375 11:53:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:21.311 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:30:21.311 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:21.311 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:21.311 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:21.311 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:21.311 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.311 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.570 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:21.570 "name": "raid_bdev1", 00:30:21.570 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:21.570 "strip_size_kb": 0, 00:30:21.570 "state": "online", 00:30:21.570 "raid_level": "raid1", 00:30:21.570 "superblock": false, 00:30:21.570 "num_base_bdevs": 2, 00:30:21.570 "num_base_bdevs_discovered": 2, 00:30:21.570 "num_base_bdevs_operational": 2, 00:30:21.570 "process": { 00:30:21.570 "type": "rebuild", 00:30:21.570 "target": "spare", 00:30:21.570 "progress": { 00:30:21.570 "blocks": 22528, 00:30:21.570 "percent": 34 00:30:21.570 } 00:30:21.570 }, 00:30:21.570 "base_bdevs_list": [ 00:30:21.570 { 00:30:21.570 "name": "spare", 00:30:21.570 "uuid": "45d435b6-ed31-51b6-87ce-429fe6349cfb", 00:30:21.570 "is_configured": true, 00:30:21.570 "data_offset": 0, 00:30:21.570 "data_size": 65536 00:30:21.570 }, 00:30:21.570 { 00:30:21.570 "name": "BaseBdev2", 00:30:21.570 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:21.570 "is_configured": true, 00:30:21.570 "data_offset": 0, 00:30:21.570 "data_size": 65536 00:30:21.570 } 00:30:21.570 ] 00:30:21.570 }' 00:30:21.570 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:21.570 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:21.570 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:21.570 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:21.570 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:30:21.570 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:30:21.570 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=893 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.571 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.830 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:21.830 "name": "raid_bdev1", 00:30:21.830 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:21.830 "strip_size_kb": 0, 00:30:21.830 "state": "online", 00:30:21.830 "raid_level": "raid1", 00:30:21.830 "superblock": false, 00:30:21.830 "num_base_bdevs": 2, 00:30:21.830 "num_base_bdevs_discovered": 2, 00:30:21.830 "num_base_bdevs_operational": 2, 00:30:21.830 "process": { 00:30:21.830 "type": "rebuild", 00:30:21.830 "target": "spare", 00:30:21.830 "progress": { 00:30:21.830 "blocks": 30720, 00:30:21.830 "percent": 46 00:30:21.830 } 00:30:21.830 }, 00:30:21.830 "base_bdevs_list": [ 00:30:21.830 { 00:30:21.830 "name": "spare", 00:30:21.830 "uuid": "45d435b6-ed31-51b6-87ce-429fe6349cfb", 00:30:21.830 "is_configured": true, 00:30:21.830 "data_offset": 0, 00:30:21.830 "data_size": 65536 00:30:21.830 }, 00:30:21.830 { 00:30:21.830 "name": "BaseBdev2", 00:30:21.830 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:21.830 "is_configured": true, 00:30:21.830 "data_offset": 0, 00:30:21.830 "data_size": 65536 00:30:21.830 } 00:30:21.830 ] 00:30:21.830 }' 00:30:21.830 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:21.830 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:21.830 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:21.830 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:21.830 11:53:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:22.766 11:53:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:22.766 11:53:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:22.766 11:53:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:22.766 11:53:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:22.766 11:53:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:22.766 11:53:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:22.766 11:53:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.766 11:53:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.024 11:53:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.024 "name": "raid_bdev1", 00:30:23.024 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:23.024 "strip_size_kb": 0, 00:30:23.024 "state": "online", 00:30:23.024 "raid_level": "raid1", 00:30:23.024 "superblock": false, 00:30:23.024 "num_base_bdevs": 2, 00:30:23.024 "num_base_bdevs_discovered": 2, 00:30:23.024 "num_base_bdevs_operational": 2, 00:30:23.024 "process": { 00:30:23.024 "type": "rebuild", 00:30:23.024 "target": "spare", 00:30:23.024 "progress": { 00:30:23.024 "blocks": 57344, 00:30:23.024 "percent": 87 00:30:23.024 } 00:30:23.024 
}, 00:30:23.024 "base_bdevs_list": [ 00:30:23.024 { 00:30:23.024 "name": "spare", 00:30:23.024 "uuid": "45d435b6-ed31-51b6-87ce-429fe6349cfb", 00:30:23.024 "is_configured": true, 00:30:23.024 "data_offset": 0, 00:30:23.024 "data_size": 65536 00:30:23.024 }, 00:30:23.024 { 00:30:23.024 "name": "BaseBdev2", 00:30:23.024 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:23.024 "is_configured": true, 00:30:23.024 "data_offset": 0, 00:30:23.024 "data_size": 65536 00:30:23.024 } 00:30:23.024 ] 00:30:23.024 }' 00:30:23.024 11:53:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.282 11:53:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.282 11:53:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.282 11:53:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:23.282 11:53:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:23.540 [2024-06-10 11:53:55.378961] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:23.540 [2024-06-10 11:53:55.379277] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:23.540 [2024-06-10 11:53:55.379480] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:24.107 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:24.107 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:24.107 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:24.107 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:24.107 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:24.107 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:24.107 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.107 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.366 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:24.366 "name": "raid_bdev1", 00:30:24.366 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:24.366 "strip_size_kb": 0, 00:30:24.366 "state": "online", 00:30:24.366 "raid_level": "raid1", 00:30:24.366 "superblock": false, 00:30:24.366 "num_base_bdevs": 2, 00:30:24.366 "num_base_bdevs_discovered": 2, 00:30:24.366 "num_base_bdevs_operational": 2, 00:30:24.366 "base_bdevs_list": [ 00:30:24.366 { 00:30:24.366 "name": "spare", 00:30:24.366 "uuid": "45d435b6-ed31-51b6-87ce-429fe6349cfb", 00:30:24.366 "is_configured": true, 00:30:24.366 "data_offset": 0, 00:30:24.366 "data_size": 65536 00:30:24.366 }, 00:30:24.366 { 00:30:24.366 "name": "BaseBdev2", 00:30:24.366 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:24.366 "is_configured": true, 00:30:24.366 "data_offset": 0, 00:30:24.366 "data_size": 65536 00:30:24.366 } 00:30:24.366 ] 00:30:24.366 }' 00:30:24.366 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.625 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:24.883 "name": "raid_bdev1", 00:30:24.883 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:24.883 "strip_size_kb": 0, 00:30:24.883 "state": "online", 00:30:24.883 "raid_level": "raid1", 00:30:24.883 "superblock": false, 00:30:24.883 "num_base_bdevs": 2, 00:30:24.883 "num_base_bdevs_discovered": 2, 00:30:24.883 "num_base_bdevs_operational": 2, 00:30:24.883 "base_bdevs_list": [ 00:30:24.883 { 00:30:24.883 "name": "spare", 00:30:24.883 "uuid": "45d435b6-ed31-51b6-87ce-429fe6349cfb", 00:30:24.883 "is_configured": true, 00:30:24.883 "data_offset": 0, 00:30:24.883 "data_size": 65536 00:30:24.883 }, 00:30:24.883 { 00:30:24.883 "name": "BaseBdev2", 00:30:24.883 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:24.883 "is_configured": true, 00:30:24.883 "data_offset": 0, 00:30:24.883 "data_size": 65536 00:30:24.883 } 00:30:24.883 ] 00:30:24.883 }' 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:24.883 
11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.883 11:53:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.140 11:53:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.140 "name": "raid_bdev1", 00:30:25.140 "uuid": "0fae6f30-370e-4555-8b2d-0c5a7883ca98", 00:30:25.140 "strip_size_kb": 0, 00:30:25.140 "state": "online", 00:30:25.140 "raid_level": "raid1", 00:30:25.140 "superblock": false, 00:30:25.140 "num_base_bdevs": 2, 00:30:25.140 "num_base_bdevs_discovered": 2, 00:30:25.140 "num_base_bdevs_operational": 2, 00:30:25.140 "base_bdevs_list": [ 00:30:25.140 { 00:30:25.140 "name": "spare", 00:30:25.140 "uuid": "45d435b6-ed31-51b6-87ce-429fe6349cfb", 00:30:25.140 "is_configured": true, 00:30:25.140 "data_offset": 0, 00:30:25.140 "data_size": 65536 00:30:25.140 }, 00:30:25.140 { 00:30:25.140 "name": "BaseBdev2", 00:30:25.141 "uuid": "d10965c6-dc17-5795-a08d-64578f0c5ab6", 00:30:25.141 "is_configured": true, 00:30:25.141 "data_offset": 0, 00:30:25.141 "data_size": 65536 00:30:25.141 } 00:30:25.141 ] 00:30:25.141 }' 00:30:25.141 11:53:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.141 11:53:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.706 11:53:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:25.975 [2024-06-10 11:53:57.936594] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:25.975 [2024-06-10 11:53:57.936801] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:25.975 [2024-06-10 11:53:57.936984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:25.975 [2024-06-10 11:53:57.937179] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:25.975 [2024-06-10 11:53:57.937272] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:30:25.975 11:53:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:30:25.975 11:53:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:26.265 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:26.523 /dev/nbd0 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:26.523 1+0 records in 00:30:26.523 1+0 records out 00:30:26.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533269 s, 7.7 MB/s 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:26.523 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:26.782 /dev/nbd1 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:26.782 1+0 records in 00:30:26.782 1+0 records out 00:30:26.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525027 s, 7.8 MB/s 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:26.782 11:53:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:27.041 11:53:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:27.041 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:27.041 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:27.041 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:27.041 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:27.041 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:27.041 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:27.298 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:27.298 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:27.298 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:27.298 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:27.298 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:27.299 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:27.299 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:27.299 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:27.299 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:27.299 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 145828 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@949 -- # '[' -z 145828 ']' 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # kill -0 145828 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # uname 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 145828 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 145828' 00:30:27.557 killing process with pid 145828 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # kill 145828 00:30:27.557 Received shutdown signal, test time was about 60.000000 seconds 00:30:27.557 00:30:27.557 Latency(us) 00:30:27.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.557 =================================================================================================================== 00:30:27.557 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:27.557 11:53:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # wait 145828 00:30:27.557 [2024-06-10 11:53:59.588955] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:28.121 [2024-06-10 11:53:59.956667] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:30.021 ************************************ 00:30:30.021 END TEST raid_rebuild_test 00:30:30.021 ************************************ 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:30:30.021 00:30:30.021 real 0m23.731s 00:30:30.021 user 0m32.149s 00:30:30.021 sys 0m4.359s 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.021 11:54:01 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:30:30.021 11:54:01 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:30:30.021 11:54:01 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:30.021 11:54:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:30.021 
************************************ 00:30:30.021 START TEST raid_rebuild_test_sb 00:30:30.021 ************************************ 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false true 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=146387 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 146387 /var/tmp/spdk-raid.sock 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@830 -- # '[' -z 146387 ']' 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 
00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:30.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:30.021 11:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:30.022 11:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:30.022 [2024-06-10 11:54:01.784834] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:30:30.022 [2024-06-10 11:54:01.785087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146387 ] 00:30:30.022 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:30.022 Zero copy mechanism will not be used. 00:30:30.022 [2024-06-10 11:54:01.965511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.279 [2024-06-10 11:54:02.199678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.537 [2024-06-10 11:54:02.445305] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:30.795 11:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:30.795 11:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@863 -- # return 0 00:30:30.795 11:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:30.795 11:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:31.052 BaseBdev1_malloc 00:30:31.052 11:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:31.310 [2024-06-10 11:54:03.235667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:31.310 [2024-06-10 11:54:03.235942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:31.310 [2024-06-10 11:54:03.236077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:31.310 [2024-06-10 11:54:03.236192] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:31.310 [2024-06-10 11:54:03.238837] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:31.310 [2024-06-10 11:54:03.239013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:31.310 BaseBdev1 00:30:31.310 11:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:31.310 11:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:31.568 BaseBdev2_malloc 00:30:31.568 11:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:31.826 [2024-06-10 11:54:03.693928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:30:31.826 [2024-06-10 11:54:03.694388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:31.826 [2024-06-10 11:54:03.694618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:31.826 [2024-06-10 11:54:03.694793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:31.826 [2024-06-10 11:54:03.698347] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:31.826 [2024-06-10 11:54:03.698616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:31.826 BaseBdev2 00:30:31.826 11:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:32.085 spare_malloc 00:30:32.085 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:32.343 spare_delay 00:30:32.343 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:32.601 [2024-06-10 11:54:04.462964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:32.601 [2024-06-10 11:54:04.463237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:32.601 [2024-06-10 11:54:04.463392] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:32.601 [2024-06-10 11:54:04.463504] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:32.601 [2024-06-10 11:54:04.466230] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:32.601 [2024-06-10 11:54:04.466408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:32.601 spare 00:30:32.601 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:32.859 [2024-06-10 11:54:04.675422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:32.859 [2024-06-10 11:54:04.679466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:32.859 [2024-06-10 11:54:04.680111] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:30:32.859 [2024-06-10 11:54:04.680305] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:32.859 [2024-06-10 11:54:04.680698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:32.859 [2024-06-10 11:54:04.681520] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:30:32.859 [2024-06-10 11:54:04.681716] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:30:32.859 [2024-06-10 11:54:04.682121] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.859 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.116 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:33.116 "name": "raid_bdev1", 00:30:33.116 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:33.116 "strip_size_kb": 0, 00:30:33.116 "state": "online", 00:30:33.116 "raid_level": "raid1", 00:30:33.116 "superblock": true, 00:30:33.116 "num_base_bdevs": 2, 00:30:33.116 "num_base_bdevs_discovered": 2, 00:30:33.116 "num_base_bdevs_operational": 2, 00:30:33.116 "base_bdevs_list": [ 00:30:33.116 { 00:30:33.116 "name": "BaseBdev1", 00:30:33.116 "uuid": "d4a3b4dc-49a9-5a79-a74d-a1c0ff94858c", 00:30:33.116 "is_configured": true, 00:30:33.116 "data_offset": 2048, 00:30:33.116 "data_size": 63488 00:30:33.116 }, 00:30:33.116 { 00:30:33.116 "name": "BaseBdev2", 00:30:33.116 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:33.116 "is_configured": true, 00:30:33.116 "data_offset": 2048, 00:30:33.116 "data_size": 63488 00:30:33.116 } 00:30:33.116 ] 00:30:33.116 }' 00:30:33.116 11:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:33.116 11:54:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:33.683 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:33.683 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:33.683 [2024-06-10 11:54:05.728501] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@624 -- # local write_unit_size 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:33.941 11:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:34.199 [2024-06-10 11:54:06.224474] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:34.199 /dev/nbd0 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:34.457 1+0 records in 00:30:34.457 1+0 records out 00:30:34.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271135 s, 15.1 MB/s 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:34.457 11:54:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:30:34.457 11:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:39.719 63488+0 records in 00:30:39.719 63488+0 records out 00:30:39.719 32505856 bytes (33 MB, 31 MiB) copied, 4.68837 s, 6.9 MB/s 00:30:39.719 11:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:39.719 11:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:39.719 11:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:39.719 11:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:39.719 11:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:39.719 11:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:39.719 11:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:39.719 [2024-06-10 11:54:11.270102] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:39.719 [2024-06-10 11:54:11.553806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:39.719 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:39.720 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:39.720 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:39.720 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:39.720 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.720 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.720 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.720 11:54:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.720 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.720 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.978 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.978 "name": "raid_bdev1", 00:30:39.978 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:39.978 "strip_size_kb": 0, 00:30:39.978 "state": "online", 00:30:39.978 "raid_level": "raid1", 00:30:39.978 "superblock": true, 00:30:39.978 "num_base_bdevs": 2, 00:30:39.978 "num_base_bdevs_discovered": 1, 00:30:39.978 "num_base_bdevs_operational": 1, 00:30:39.978 "base_bdevs_list": [ 00:30:39.978 { 00:30:39.978 "name": null, 00:30:39.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.978 "is_configured": false, 00:30:39.978 "data_offset": 2048, 00:30:39.978 "data_size": 63488 00:30:39.978 }, 00:30:39.978 { 00:30:39.978 "name": "BaseBdev2", 00:30:39.978 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:39.978 "is_configured": true, 00:30:39.978 "data_offset": 2048, 00:30:39.978 "data_size": 63488 00:30:39.978 } 00:30:39.978 ] 00:30:39.978 }' 00:30:39.978 11:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.978 11:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.544 11:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:40.802 [2024-06-10 11:54:12.730142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:40.802 [2024-06-10 11:54:12.750575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca30f0 00:30:40.802 [2024-06-10 11:54:12.753068] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:40.802 11:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:41.736 11:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:41.736 11:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:41.736 11:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:41.736 11:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:41.736 11:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:41.736 11:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.736 11:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.301 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:42.301 "name": "raid_bdev1", 00:30:42.301 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:42.301 "strip_size_kb": 0, 00:30:42.301 "state": "online", 00:30:42.301 "raid_level": "raid1", 00:30:42.301 "superblock": true, 00:30:42.301 "num_base_bdevs": 2, 00:30:42.301 "num_base_bdevs_discovered": 2, 00:30:42.301 "num_base_bdevs_operational": 2, 00:30:42.301 
"process": { 00:30:42.301 "type": "rebuild", 00:30:42.301 "target": "spare", 00:30:42.301 "progress": { 00:30:42.301 "blocks": 26624, 00:30:42.301 "percent": 41 00:30:42.301 } 00:30:42.301 }, 00:30:42.301 "base_bdevs_list": [ 00:30:42.301 { 00:30:42.301 "name": "spare", 00:30:42.301 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:42.301 "is_configured": true, 00:30:42.301 "data_offset": 2048, 00:30:42.301 "data_size": 63488 00:30:42.301 }, 00:30:42.301 { 00:30:42.301 "name": "BaseBdev2", 00:30:42.301 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:42.301 "is_configured": true, 00:30:42.301 "data_offset": 2048, 00:30:42.301 "data_size": 63488 00:30:42.301 } 00:30:42.301 ] 00:30:42.301 }' 00:30:42.301 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:42.301 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:42.301 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:42.301 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:42.301 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:42.560 [2024-06-10 11:54:14.474648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:42.560 [2024-06-10 11:54:14.565051] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:42.560 [2024-06-10 11:54:14.565154] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:42.560 [2024-06-10 11:54:14.565187] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:42.560 [2024-06-10 11:54:14.565195] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:42.818 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.819 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.077 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:43.077 "name": "raid_bdev1", 00:30:43.077 "uuid": 
"2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:43.077 "strip_size_kb": 0, 00:30:43.077 "state": "online", 00:30:43.077 "raid_level": "raid1", 00:30:43.077 "superblock": true, 00:30:43.077 "num_base_bdevs": 2, 00:30:43.077 "num_base_bdevs_discovered": 1, 00:30:43.077 "num_base_bdevs_operational": 1, 00:30:43.077 "base_bdevs_list": [ 00:30:43.077 { 00:30:43.077 "name": null, 00:30:43.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.077 "is_configured": false, 00:30:43.077 "data_offset": 2048, 00:30:43.077 "data_size": 63488 00:30:43.077 }, 00:30:43.077 { 00:30:43.077 "name": "BaseBdev2", 00:30:43.077 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:43.077 "is_configured": true, 00:30:43.077 "data_offset": 2048, 00:30:43.077 "data_size": 63488 00:30:43.077 } 00:30:43.077 ] 00:30:43.077 }' 00:30:43.077 11:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:43.077 11:54:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.644 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:43.644 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:43.644 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:43.644 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:43.644 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:43.644 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.644 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.902 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:43.902 "name": "raid_bdev1", 00:30:43.903 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:43.903 "strip_size_kb": 0, 00:30:43.903 "state": "online", 00:30:43.903 "raid_level": "raid1", 00:30:43.903 "superblock": true, 00:30:43.903 "num_base_bdevs": 2, 00:30:43.903 "num_base_bdevs_discovered": 1, 00:30:43.903 "num_base_bdevs_operational": 1, 00:30:43.903 "base_bdevs_list": [ 00:30:43.903 { 00:30:43.903 "name": null, 00:30:43.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.903 "is_configured": false, 00:30:43.903 "data_offset": 2048, 00:30:43.903 "data_size": 63488 00:30:43.903 }, 00:30:43.903 { 00:30:43.903 "name": "BaseBdev2", 00:30:43.903 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:43.903 "is_configured": true, 00:30:43.903 "data_offset": 2048, 00:30:43.903 "data_size": 63488 00:30:43.903 } 00:30:43.903 ] 00:30:43.903 }' 00:30:43.903 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:43.903 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:43.903 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:43.903 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:43.903 11:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:44.161 [2024-06-10 11:54:16.145195] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:44.161 [2024-06-10 11:54:16.165018] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:30:44.162 [2024-06-10 11:54:16.167306] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:44.162 11:54:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:45.539 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:45.539 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:45.539 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:45.539 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:45.539 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:45.539 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.539 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.539 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.539 "name": "raid_bdev1", 00:30:45.539 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:45.540 "strip_size_kb": 0, 00:30:45.540 "state": "online", 00:30:45.540 "raid_level": "raid1", 00:30:45.540 "superblock": true, 00:30:45.540 "num_base_bdevs": 2, 00:30:45.540 "num_base_bdevs_discovered": 2, 00:30:45.540 "num_base_bdevs_operational": 2, 00:30:45.540 "process": { 00:30:45.540 "type": "rebuild", 00:30:45.540 "target": "spare", 00:30:45.540 "progress": { 00:30:45.540 "blocks": 24576, 00:30:45.540 "percent": 38 00:30:45.540 } 00:30:45.540 }, 00:30:45.540 "base_bdevs_list": [ 00:30:45.540 { 00:30:45.540 "name": "spare", 00:30:45.540 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:45.540 "is_configured": true, 00:30:45.540 "data_offset": 2048, 00:30:45.540 "data_size": 63488 00:30:45.540 }, 00:30:45.540 { 00:30:45.540 "name": "BaseBdev2", 00:30:45.540 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:45.540 "is_configured": true, 00:30:45.540 "data_offset": 2048, 00:30:45.540 "data_size": 63488 00:30:45.540 } 00:30:45.540 ] 00:30:45.540 }' 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:45.540 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 
2 -gt 2 ']' 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=917 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.540 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.798 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.798 "name": "raid_bdev1", 00:30:45.798 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:45.798 "strip_size_kb": 0, 00:30:45.798 "state": "online", 00:30:45.798 "raid_level": "raid1", 00:30:45.798 "superblock": true, 00:30:45.798 "num_base_bdevs": 2, 00:30:45.798 "num_base_bdevs_discovered": 2, 00:30:45.798 "num_base_bdevs_operational": 2, 00:30:45.798 "process": { 00:30:45.798 "type": "rebuild", 00:30:45.798 "target": "spare", 00:30:45.798 "progress": { 00:30:45.798 "blocks": 30720, 00:30:45.798 "percent": 48 00:30:45.798 } 00:30:45.798 }, 00:30:45.798 "base_bdevs_list": [ 00:30:45.798 { 00:30:45.798 "name": "spare", 00:30:45.798 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:45.798 "is_configured": true, 00:30:45.798 "data_offset": 2048, 00:30:45.798 "data_size": 63488 00:30:45.798 }, 00:30:45.798 { 00:30:45.798 "name": "BaseBdev2", 00:30:45.798 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:45.798 "is_configured": true, 00:30:45.798 "data_offset": 2048, 00:30:45.798 "data_size": 63488 00:30:45.798 } 00:30:45.798 ] 00:30:45.798 }' 00:30:45.798 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.798 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.798 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:46.058 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:46.058 11:54:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:47.113 11:54:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:47.113 11:54:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:47.113 11:54:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.113 11:54:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:47.113 11:54:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:47.113 11:54:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.113 11:54:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.113 11:54:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.113 11:54:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:47.113 "name": "raid_bdev1", 00:30:47.113 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:47.113 "strip_size_kb": 0, 00:30:47.113 "state": "online", 00:30:47.113 "raid_level": "raid1", 00:30:47.113 "superblock": true, 00:30:47.114 "num_base_bdevs": 2, 00:30:47.114 "num_base_bdevs_discovered": 2, 00:30:47.114 "num_base_bdevs_operational": 2, 00:30:47.114 "process": { 00:30:47.114 "type": "rebuild", 00:30:47.114 "target": "spare", 00:30:47.114 "progress": { 00:30:47.114 "blocks": 59392, 00:30:47.114 "percent": 93 00:30:47.114 } 00:30:47.114 }, 00:30:47.114 "base_bdevs_list": [ 00:30:47.114 { 00:30:47.114 "name": "spare", 00:30:47.114 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:47.114 "is_configured": true, 00:30:47.114 "data_offset": 2048, 00:30:47.114 "data_size": 63488 00:30:47.114 }, 00:30:47.114 { 00:30:47.114 "name": "BaseBdev2", 00:30:47.114 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:47.114 "is_configured": true, 00:30:47.114 "data_offset": 2048, 00:30:47.114 "data_size": 63488 00:30:47.114 } 00:30:47.114 ] 00:30:47.114 }' 00:30:47.114 11:54:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.373 11:54:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:47.373 11:54:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.373 11:54:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:47.373 11:54:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:47.373 [2024-06-10 11:54:19.287029] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:47.373 [2024-06-10 11:54:19.287114] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:47.373 [2024-06-10 11:54:19.287273] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:48.308 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:48.308 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:48.308 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:48.308 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:48.308 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:48.308 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:48.308 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.308 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.567 "name": "raid_bdev1", 00:30:48.567 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:48.567 
"strip_size_kb": 0, 00:30:48.567 "state": "online", 00:30:48.567 "raid_level": "raid1", 00:30:48.567 "superblock": true, 00:30:48.567 "num_base_bdevs": 2, 00:30:48.567 "num_base_bdevs_discovered": 2, 00:30:48.567 "num_base_bdevs_operational": 2, 00:30:48.567 "base_bdevs_list": [ 00:30:48.567 { 00:30:48.567 "name": "spare", 00:30:48.567 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:48.567 "is_configured": true, 00:30:48.567 "data_offset": 2048, 00:30:48.567 "data_size": 63488 00:30:48.567 }, 00:30:48.567 { 00:30:48.567 "name": "BaseBdev2", 00:30:48.567 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:48.567 "is_configured": true, 00:30:48.567 "data_offset": 2048, 00:30:48.567 "data_size": 63488 00:30:48.567 } 00:30:48.567 ] 00:30:48.567 }' 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.567 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.825 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.825 "name": "raid_bdev1", 00:30:48.825 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:48.825 "strip_size_kb": 0, 00:30:48.825 "state": "online", 00:30:48.825 "raid_level": "raid1", 00:30:48.825 "superblock": true, 00:30:48.825 "num_base_bdevs": 2, 00:30:48.825 "num_base_bdevs_discovered": 2, 00:30:48.825 "num_base_bdevs_operational": 2, 00:30:48.825 "base_bdevs_list": [ 00:30:48.825 { 00:30:48.825 "name": "spare", 00:30:48.825 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:48.825 "is_configured": true, 00:30:48.825 "data_offset": 2048, 00:30:48.825 "data_size": 63488 00:30:48.825 }, 00:30:48.825 { 00:30:48.825 "name": "BaseBdev2", 00:30:48.825 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:48.825 "is_configured": true, 00:30:48.825 "data_offset": 2048, 00:30:48.825 "data_size": 63488 00:30:48.825 } 00:30:48.825 ] 00:30:48.825 }' 00:30:48.825 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- 
# [[ none == \n\o\n\e ]] 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.083 11:54:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.341 11:54:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:49.341 "name": "raid_bdev1", 00:30:49.341 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:49.341 "strip_size_kb": 0, 00:30:49.341 "state": "online", 00:30:49.341 "raid_level": "raid1", 00:30:49.341 "superblock": true, 00:30:49.341 "num_base_bdevs": 2, 00:30:49.341 "num_base_bdevs_discovered": 2, 00:30:49.341 "num_base_bdevs_operational": 2, 00:30:49.341 "base_bdevs_list": [ 00:30:49.341 { 00:30:49.341 "name": "spare", 00:30:49.341 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:49.341 "is_configured": true, 00:30:49.341 "data_offset": 2048, 00:30:49.341 "data_size": 63488 00:30:49.341 }, 00:30:49.341 { 00:30:49.341 "name": "BaseBdev2", 00:30:49.341 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:49.341 "is_configured": true, 00:30:49.341 "data_offset": 2048, 00:30:49.341 "data_size": 63488 00:30:49.341 } 00:30:49.341 ] 00:30:49.341 }' 00:30:49.341 11:54:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:49.341 11:54:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.906 11:54:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:50.164 [2024-06-10 11:54:22.065790] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:50.164 [2024-06-10 11:54:22.065842] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:50.164 [2024-06-10 11:54:22.065972] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:50.164 [2024-06-10 11:54:22.066089] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:50.164 [2024-06-10 11:54:22.066116] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:30:50.164 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.164 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:30:50.422 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:50.422 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:50.422 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:50.423 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:50.681 /dev/nbd0 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.681 1+0 records in 00:30:50.681 1+0 records out 00:30:50.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069197 s, 5.9 MB/s 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 
4096 '!=' 0 ']' 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:50.681 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:50.938 /dev/nbd1 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.938 1+0 records in 00:30:50.938 1+0 records out 00:30:50.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872064 s, 4.7 MB/s 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:50.938 11:54:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:51.196 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:51.196 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.196 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:51.196 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:51.196 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:51.196 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:51.196 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:51.453 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:51.711 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:51.968 11:54:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:52.227 [2024-06-10 11:54:24.158860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:52.227 [2024-06-10 11:54:24.159108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.227 [2024-06-10 11:54:24.159273] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:30:52.227 [2024-06-10 11:54:24.159372] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.227 [2024-06-10 11:54:24.161978] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.227 [2024-06-10 11:54:24.162160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:52.227 [2024-06-10 11:54:24.162409] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:52.227 [2024-06-10 11:54:24.162569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:52.227 [2024-06-10 11:54:24.162886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:30:52.227 spare 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.227 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.227 [2024-06-10 11:54:24.263146] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:30:52.227 [2024-06-10 11:54:24.263336] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:52.227 [2024-06-10 11:54:24.263553] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:30:52.227 [2024-06-10 11:54:24.264155] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:30:52.227 [2024-06-10 11:54:24.264272] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:30:52.227 [2024-06-10 11:54:24.264547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:52.485 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:52.485 "name": "raid_bdev1", 00:30:52.485 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:52.485 "strip_size_kb": 0, 00:30:52.485 "state": "online", 00:30:52.485 "raid_level": "raid1", 00:30:52.485 "superblock": true, 00:30:52.485 "num_base_bdevs": 2, 00:30:52.485 "num_base_bdevs_discovered": 2, 00:30:52.485 "num_base_bdevs_operational": 2, 00:30:52.485 "base_bdevs_list": [ 00:30:52.485 { 00:30:52.485 "name": "spare", 00:30:52.485 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:52.485 "is_configured": true, 00:30:52.485 "data_offset": 2048, 00:30:52.485 "data_size": 63488 00:30:52.485 }, 00:30:52.485 { 00:30:52.485 "name": "BaseBdev2", 00:30:52.485 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:52.485 "is_configured": true, 00:30:52.485 "data_offset": 2048, 00:30:52.485 "data_size": 63488 00:30:52.485 } 00:30:52.485 ] 00:30:52.485 }' 00:30:52.485 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:52.485 11:54:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.050 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:53.050 11:54:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:53.050 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:53.050 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:53.050 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:53.050 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.050 11:54:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.309 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:53.309 "name": "raid_bdev1", 00:30:53.309 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:53.309 "strip_size_kb": 0, 00:30:53.309 "state": "online", 00:30:53.309 "raid_level": "raid1", 00:30:53.309 "superblock": true, 00:30:53.309 "num_base_bdevs": 2, 00:30:53.309 "num_base_bdevs_discovered": 2, 00:30:53.309 "num_base_bdevs_operational": 2, 00:30:53.309 "base_bdevs_list": [ 00:30:53.309 { 00:30:53.309 "name": "spare", 00:30:53.309 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:53.309 "is_configured": true, 00:30:53.309 "data_offset": 2048, 00:30:53.309 "data_size": 63488 00:30:53.309 }, 00:30:53.309 { 00:30:53.309 "name": "BaseBdev2", 00:30:53.309 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:53.309 "is_configured": true, 00:30:53.309 "data_offset": 2048, 00:30:53.309 "data_size": 63488 00:30:53.309 } 00:30:53.309 ] 00:30:53.309 }' 00:30:53.309 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:53.309 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:53.309 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:53.309 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:53.309 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.309 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:53.567 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:53.567 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:53.825 [2024-06-10 11:54:25.811402] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:53.825 11:54:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.825 11:54:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.083 11:54:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:54.083 "name": "raid_bdev1", 00:30:54.083 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:54.083 "strip_size_kb": 0, 00:30:54.083 "state": "online", 00:30:54.083 "raid_level": "raid1", 00:30:54.083 "superblock": true, 00:30:54.083 "num_base_bdevs": 2, 00:30:54.083 "num_base_bdevs_discovered": 1, 00:30:54.083 "num_base_bdevs_operational": 1, 00:30:54.083 "base_bdevs_list": [ 00:30:54.083 { 00:30:54.083 "name": null, 00:30:54.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.083 "is_configured": false, 00:30:54.083 "data_offset": 2048, 00:30:54.083 "data_size": 63488 00:30:54.083 }, 00:30:54.083 { 00:30:54.083 "name": "BaseBdev2", 00:30:54.083 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:54.083 "is_configured": true, 00:30:54.083 "data_offset": 2048, 00:30:54.083 "data_size": 63488 00:30:54.083 } 00:30:54.083 ] 00:30:54.083 }' 00:30:54.083 11:54:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:54.083 11:54:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.019 11:54:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:55.019 [2024-06-10 11:54:26.947726] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:55.019 [2024-06-10 11:54:26.948137] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:55.019 [2024-06-10 11:54:26.948249] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
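Illustrative sketch, not part of the captured run: the verification steps logged above repeatedly call bdev_raid_get_bdevs over /var/tmp/spdk-raid.sock and filter the JSON with jq until the rebuild process disappears. A minimal standalone polling loop, assuming the same rpc.py path, socket and raid bdev name that appear in this log, could look like this:

    # Sketch only: wait for an in-progress rebuild on raid_bdev1 to finish,
    # mirroring the rpc.py + jq calls shown in the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    while true; do
      # Fetch all raid bdevs and keep only the raid_bdev1 entry, as the test does.
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      ptype=$(jq -r '.process.type // "none"' <<< "$info")
      pct=$(jq -r '.process.progress.percent // "n/a"' <<< "$info")
      echo "process=$ptype progress=$pct%"
      # "none" means no rebuild is running any more, matching the [[ none == none ]] checks above.
      [[ $ptype == none ]] && break
      sleep 1
    done

The .process.type and .process.progress.percent fields used here are the same ones visible in the bdev_raid_get_bdevs output dumped earlier in this trace.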
00:30:55.019 [2024-06-10 11:54:26.948344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:55.019 [2024-06-10 11:54:26.967387] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:30:55.019 [2024-06-10 11:54:26.969783] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:55.019 11:54:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:55.956 11:54:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:55.956 11:54:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:55.956 11:54:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:55.956 11:54:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:55.956 11:54:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:55.956 11:54:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.956 11:54:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.215 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:56.215 "name": "raid_bdev1", 00:30:56.215 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:56.215 "strip_size_kb": 0, 00:30:56.215 "state": "online", 00:30:56.215 "raid_level": "raid1", 00:30:56.215 "superblock": true, 00:30:56.215 "num_base_bdevs": 2, 00:30:56.215 "num_base_bdevs_discovered": 2, 00:30:56.215 "num_base_bdevs_operational": 2, 00:30:56.215 "process": { 00:30:56.215 "type": "rebuild", 00:30:56.215 "target": "spare", 00:30:56.215 "progress": { 00:30:56.215 "blocks": 24576, 00:30:56.215 "percent": 38 00:30:56.215 } 00:30:56.215 }, 00:30:56.215 "base_bdevs_list": [ 00:30:56.215 { 00:30:56.215 "name": "spare", 00:30:56.215 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:56.215 "is_configured": true, 00:30:56.215 "data_offset": 2048, 00:30:56.215 "data_size": 63488 00:30:56.215 }, 00:30:56.215 { 00:30:56.215 "name": "BaseBdev2", 00:30:56.215 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:56.215 "is_configured": true, 00:30:56.215 "data_offset": 2048, 00:30:56.215 "data_size": 63488 00:30:56.215 } 00:30:56.215 ] 00:30:56.215 }' 00:30:56.215 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:56.472 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:56.472 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:56.472 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:56.472 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:56.730 [2024-06-10 11:54:28.627101] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:56.730 [2024-06-10 11:54:28.680714] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:56.730 [2024-06-10 11:54:28.681010] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:56.730 
[2024-06-10 11:54:28.681143] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:56.730 [2024-06-10 11:54:28.681198] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.730 11:54:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.987 11:54:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:56.987 "name": "raid_bdev1", 00:30:56.987 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:56.987 "strip_size_kb": 0, 00:30:56.987 "state": "online", 00:30:56.987 "raid_level": "raid1", 00:30:56.987 "superblock": true, 00:30:56.987 "num_base_bdevs": 2, 00:30:56.987 "num_base_bdevs_discovered": 1, 00:30:56.988 "num_base_bdevs_operational": 1, 00:30:56.988 "base_bdevs_list": [ 00:30:56.988 { 00:30:56.988 "name": null, 00:30:56.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.988 "is_configured": false, 00:30:56.988 "data_offset": 2048, 00:30:56.988 "data_size": 63488 00:30:56.988 }, 00:30:56.988 { 00:30:56.988 "name": "BaseBdev2", 00:30:56.988 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:56.988 "is_configured": true, 00:30:56.988 "data_offset": 2048, 00:30:56.988 "data_size": 63488 00:30:56.988 } 00:30:56.988 ] 00:30:56.988 }' 00:30:56.988 11:54:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:56.988 11:54:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.925 11:54:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:58.183 [2024-06-10 11:54:30.001584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:58.183 [2024-06-10 11:54:30.001886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.183 [2024-06-10 11:54:30.001976] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:30:58.183 [2024-06-10 11:54:30.002094] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.183 [2024-06-10 11:54:30.002726] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.183 [2024-06-10 11:54:30.002886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:58.184 [2024-06-10 11:54:30.003119] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:58.184 [2024-06-10 11:54:30.003225] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:58.184 [2024-06-10 11:54:30.003323] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:58.184 [2024-06-10 11:54:30.003463] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:58.184 [2024-06-10 11:54:30.021780] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:30:58.184 spare 00:30:58.184 [2024-06-10 11:54:30.024122] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:58.184 11:54:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:59.162 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:59.162 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:59.162 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:59.162 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:59.162 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:59.162 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.162 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.421 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:59.421 "name": "raid_bdev1", 00:30:59.421 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:59.421 "strip_size_kb": 0, 00:30:59.421 "state": "online", 00:30:59.421 "raid_level": "raid1", 00:30:59.421 "superblock": true, 00:30:59.421 "num_base_bdevs": 2, 00:30:59.421 "num_base_bdevs_discovered": 2, 00:30:59.421 "num_base_bdevs_operational": 2, 00:30:59.421 "process": { 00:30:59.421 "type": "rebuild", 00:30:59.421 "target": "spare", 00:30:59.421 "progress": { 00:30:59.421 "blocks": 22528, 00:30:59.421 "percent": 35 00:30:59.421 } 00:30:59.421 }, 00:30:59.421 "base_bdevs_list": [ 00:30:59.421 { 00:30:59.421 "name": "spare", 00:30:59.421 "uuid": "a134c0f0-aa72-5dbb-b906-0ab82f8d4c6f", 00:30:59.421 "is_configured": true, 00:30:59.421 "data_offset": 2048, 00:30:59.421 "data_size": 63488 00:30:59.421 }, 00:30:59.421 { 00:30:59.421 "name": "BaseBdev2", 00:30:59.421 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:59.421 "is_configured": true, 00:30:59.421 "data_offset": 2048, 00:30:59.421 "data_size": 63488 00:30:59.421 } 00:30:59.421 ] 00:30:59.421 }' 00:30:59.421 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:59.421 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:59.421 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:59.421 
11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:59.421 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:59.680 [2024-06-10 11:54:31.513809] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:59.680 [2024-06-10 11:54:31.533646] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:59.680 [2024-06-10 11:54:31.533916] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:59.680 [2024-06-10 11:54:31.534036] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:59.680 [2024-06-10 11:54:31.534121] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.680 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.939 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.939 "name": "raid_bdev1", 00:30:59.939 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:30:59.939 "strip_size_kb": 0, 00:30:59.939 "state": "online", 00:30:59.939 "raid_level": "raid1", 00:30:59.939 "superblock": true, 00:30:59.939 "num_base_bdevs": 2, 00:30:59.939 "num_base_bdevs_discovered": 1, 00:30:59.939 "num_base_bdevs_operational": 1, 00:30:59.939 "base_bdevs_list": [ 00:30:59.939 { 00:30:59.939 "name": null, 00:30:59.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.939 "is_configured": false, 00:30:59.939 "data_offset": 2048, 00:30:59.939 "data_size": 63488 00:30:59.939 }, 00:30:59.939 { 00:30:59.939 "name": "BaseBdev2", 00:30:59.939 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:30:59.939 "is_configured": true, 00:30:59.939 "data_offset": 2048, 00:30:59.939 "data_size": 63488 00:30:59.939 } 00:30:59.939 ] 00:30:59.939 }' 00:30:59.939 11:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.939 11:54:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:00.506 11:54:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:00.506 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:00.506 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:00.506 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:00.506 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:00.506 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.506 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.765 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:00.765 "name": "raid_bdev1", 00:31:00.765 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:31:00.765 "strip_size_kb": 0, 00:31:00.765 "state": "online", 00:31:00.765 "raid_level": "raid1", 00:31:00.765 "superblock": true, 00:31:00.765 "num_base_bdevs": 2, 00:31:00.765 "num_base_bdevs_discovered": 1, 00:31:00.765 "num_base_bdevs_operational": 1, 00:31:00.765 "base_bdevs_list": [ 00:31:00.765 { 00:31:00.765 "name": null, 00:31:00.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.765 "is_configured": false, 00:31:00.765 "data_offset": 2048, 00:31:00.765 "data_size": 63488 00:31:00.765 }, 00:31:00.765 { 00:31:00.765 "name": "BaseBdev2", 00:31:00.765 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:31:00.765 "is_configured": true, 00:31:00.765 "data_offset": 2048, 00:31:00.765 "data_size": 63488 00:31:00.765 } 00:31:00.765 ] 00:31:00.765 }' 00:31:00.765 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:00.765 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:00.765 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:01.023 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:01.023 11:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:01.023 11:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:01.282 [2024-06-10 11:54:33.260302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:01.282 [2024-06-10 11:54:33.260573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.282 [2024-06-10 11:54:33.260670] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:31:01.282 [2024-06-10 11:54:33.260872] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.282 [2024-06-10 11:54:33.261396] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.282 [2024-06-10 11:54:33.261545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:01.282 [2024-06-10 11:54:33.261786] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:01.282 [2024-06-10 11:54:33.261892] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:01.282 [2024-06-10 11:54:33.261966] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:01.282 BaseBdev1 00:31:01.282 11:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:02.657 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:02.657 "name": "raid_bdev1", 00:31:02.657 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:31:02.657 "strip_size_kb": 0, 00:31:02.657 "state": "online", 00:31:02.657 "raid_level": "raid1", 00:31:02.657 "superblock": true, 00:31:02.657 "num_base_bdevs": 2, 00:31:02.657 "num_base_bdevs_discovered": 1, 00:31:02.657 "num_base_bdevs_operational": 1, 00:31:02.657 "base_bdevs_list": [ 00:31:02.657 { 00:31:02.657 "name": null, 00:31:02.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.657 "is_configured": false, 00:31:02.657 "data_offset": 2048, 00:31:02.657 "data_size": 63488 00:31:02.657 }, 00:31:02.657 { 00:31:02.657 "name": "BaseBdev2", 00:31:02.657 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:31:02.658 "is_configured": true, 00:31:02.658 "data_offset": 2048, 00:31:02.658 "data_size": 63488 00:31:02.658 } 00:31:02.658 ] 00:31:02.658 }' 00:31:02.658 11:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:02.658 11:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:03.225 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:03.225 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:03.225 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:03.225 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:03.225 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:03.225 11:54:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.225 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:03.484 "name": "raid_bdev1", 00:31:03.484 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:31:03.484 "strip_size_kb": 0, 00:31:03.484 "state": "online", 00:31:03.484 "raid_level": "raid1", 00:31:03.484 "superblock": true, 00:31:03.484 "num_base_bdevs": 2, 00:31:03.484 "num_base_bdevs_discovered": 1, 00:31:03.484 "num_base_bdevs_operational": 1, 00:31:03.484 "base_bdevs_list": [ 00:31:03.484 { 00:31:03.484 "name": null, 00:31:03.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.484 "is_configured": false, 00:31:03.484 "data_offset": 2048, 00:31:03.484 "data_size": 63488 00:31:03.484 }, 00:31:03.484 { 00:31:03.484 "name": "BaseBdev2", 00:31:03.484 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:31:03.484 "is_configured": true, 00:31:03.484 "data_offset": 2048, 00:31:03.484 "data_size": 63488 00:31:03.484 } 00:31:03.484 ] 00:31:03.484 }' 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # local es=0 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:03.484 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:03.742 [2024-06-10 11:54:35.751518] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:03.743 [2024-06-10 11:54:35.751883] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:03.743 [2024-06-10 11:54:35.752035] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:03.743 request: 00:31:03.743 { 00:31:03.743 "base_bdev": "BaseBdev1", 00:31:03.743 "raid_bdev": "raid_bdev1", 00:31:03.743 "method": "bdev_raid_add_base_bdev", 00:31:03.743 "req_id": 1 00:31:03.743 } 00:31:03.743 Got JSON-RPC error response 00:31:03.743 response: 00:31:03.743 { 00:31:03.743 "code": -22, 00:31:03.743 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:03.743 } 00:31:03.743 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # es=1 00:31:03.743 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:03.743 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:03.743 11:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:03.743 11:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.118 11:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.118 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:05.118 "name": "raid_bdev1", 00:31:05.118 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:31:05.118 "strip_size_kb": 0, 00:31:05.118 "state": "online", 00:31:05.118 "raid_level": "raid1", 00:31:05.118 "superblock": true, 00:31:05.118 "num_base_bdevs": 2, 00:31:05.118 "num_base_bdevs_discovered": 1, 00:31:05.118 "num_base_bdevs_operational": 1, 00:31:05.118 "base_bdevs_list": [ 00:31:05.118 { 00:31:05.118 "name": null, 00:31:05.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.118 "is_configured": false, 00:31:05.119 "data_offset": 2048, 00:31:05.119 "data_size": 63488 00:31:05.119 }, 00:31:05.119 { 00:31:05.119 "name": "BaseBdev2", 00:31:05.119 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 
00:31:05.119 "is_configured": true, 00:31:05.119 "data_offset": 2048, 00:31:05.119 "data_size": 63488 00:31:05.119 } 00:31:05.119 ] 00:31:05.119 }' 00:31:05.119 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:05.119 11:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:05.687 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:05.687 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:05.687 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:05.687 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:05.687 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:05.687 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.687 11:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.254 11:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:06.254 "name": "raid_bdev1", 00:31:06.254 "uuid": "2ebbcd50-c594-47b8-b41a-ac22b32d6400", 00:31:06.254 "strip_size_kb": 0, 00:31:06.254 "state": "online", 00:31:06.254 "raid_level": "raid1", 00:31:06.255 "superblock": true, 00:31:06.255 "num_base_bdevs": 2, 00:31:06.255 "num_base_bdevs_discovered": 1, 00:31:06.255 "num_base_bdevs_operational": 1, 00:31:06.255 "base_bdevs_list": [ 00:31:06.255 { 00:31:06.255 "name": null, 00:31:06.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.255 "is_configured": false, 00:31:06.255 "data_offset": 2048, 00:31:06.255 "data_size": 63488 00:31:06.255 }, 00:31:06.255 { 00:31:06.255 "name": "BaseBdev2", 00:31:06.255 "uuid": "3d9ed46f-7e54-5e52-a630-103acdd12992", 00:31:06.255 "is_configured": true, 00:31:06.255 "data_offset": 2048, 00:31:06.255 "data_size": 63488 00:31:06.255 } 00:31:06.255 ] 00:31:06.255 }' 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 146387 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@949 -- # '[' -z 146387 ']' 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # kill -0 146387 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # uname 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 146387 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 146387' 00:31:06.255 killing process with pid 146387 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # kill 146387 00:31:06.255 Received shutdown signal, test time was about 60.000000 seconds 00:31:06.255 00:31:06.255 Latency(us) 00:31:06.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.255 =================================================================================================================== 00:31:06.255 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:06.255 11:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # wait 146387 00:31:06.255 [2024-06-10 11:54:38.125069] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:06.255 [2024-06-10 11:54:38.125193] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:06.255 [2024-06-10 11:54:38.125344] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:06.255 [2024-06-10 11:54:38.125434] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:31:06.514 [2024-06-10 11:54:38.491128] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:08.413 ************************************ 00:31:08.413 END TEST raid_rebuild_test_sb 00:31:08.413 ************************************ 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:31:08.413 00:31:08.413 real 0m38.445s 00:31:08.413 user 0m56.306s 00:31:08.413 sys 0m5.825s 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:08.413 11:54:40 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:31:08.413 11:54:40 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:31:08.413 11:54:40 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:08.413 11:54:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:08.413 ************************************ 00:31:08.413 START TEST raid_rebuild_test_io 00:31:08.413 ************************************ 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 false true true 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 
-- # (( i <= num_base_bdevs )) 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=147346 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 147346 /var/tmp/spdk-raid.sock 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@830 -- # '[' -z 147346 ']' 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:08.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:08.413 11:54:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.413 [2024-06-10 11:54:40.313086] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:31:08.413 [2024-06-10 11:54:40.315632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147346 ] 00:31:08.413 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:08.413 Zero copy mechanism will not be used. 
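A note on the bdevperf invocation traced above: -o 3M is 3145728 bytes, which is why bdevperf reports that the 65536-byte zero-copy threshold is exceeded and zero copy is disabled, and -r names the UNIX-domain RPC socket that every rpc.py call in this test targets. A minimal standalone sketch of the same launch, assuming an SPDK build tree and the paths printed in this log (the harness itself uses waitforlisten rather than the crude socket poll shown here):

    sock=/var/tmp/spdk-raid.sock
    ./build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # wait for the RPC socket to appear before issuing any bdev_* RPCs against it
    until [ -S "$sock" ]; do sleep 0.1; done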
00:31:08.672 [2024-06-10 11:54:40.505132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.672 [2024-06-10 11:54:40.726581] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.930 [2024-06-10 11:54:40.975177] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:09.497 11:54:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:09.497 11:54:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@863 -- # return 0 00:31:09.497 11:54:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:09.497 11:54:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:09.497 BaseBdev1_malloc 00:31:09.754 11:54:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:10.013 [2024-06-10 11:54:41.816958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:10.013 [2024-06-10 11:54:41.817314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.013 [2024-06-10 11:54:41.817398] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:10.013 [2024-06-10 11:54:41.817537] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.013 [2024-06-10 11:54:41.820145] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.013 [2024-06-10 11:54:41.820333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:10.013 BaseBdev1 00:31:10.013 11:54:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:10.013 11:54:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:10.269 BaseBdev2_malloc 00:31:10.269 11:54:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:10.526 [2024-06-10 11:54:42.347709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:10.526 [2024-06-10 11:54:42.347988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.526 [2024-06-10 11:54:42.348154] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:10.526 [2024-06-10 11:54:42.348284] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.526 [2024-06-10 11:54:42.350862] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.526 [2024-06-10 11:54:42.351042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:10.526 BaseBdev2 00:31:10.526 11:54:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:10.869 spare_malloc 00:31:10.869 11:54:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:10.869 spare_delay 00:31:10.869 11:54:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:11.128 [2024-06-10 11:54:43.069913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:11.128 [2024-06-10 11:54:43.070272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:11.128 [2024-06-10 11:54:43.070414] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:11.128 [2024-06-10 11:54:43.070527] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:11.128 [2024-06-10 11:54:43.073143] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:11.128 [2024-06-10 11:54:43.073332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:11.128 spare 00:31:11.128 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:11.386 [2024-06-10 11:54:43.290083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:11.386 [2024-06-10 11:54:43.292366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:11.386 [2024-06-10 11:54:43.292602] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:31:11.386 [2024-06-10 11:54:43.292647] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:11.386 [2024-06-10 11:54:43.292886] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:11.386 [2024-06-10 11:54:43.293369] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:31:11.386 [2024-06-10 11:54:43.293490] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:31:11.386 [2024-06-10 11:54:43.293757] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:31:11.386 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.644 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.644 "name": "raid_bdev1", 00:31:11.644 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:11.644 "strip_size_kb": 0, 00:31:11.644 "state": "online", 00:31:11.644 "raid_level": "raid1", 00:31:11.644 "superblock": false, 00:31:11.644 "num_base_bdevs": 2, 00:31:11.644 "num_base_bdevs_discovered": 2, 00:31:11.644 "num_base_bdevs_operational": 2, 00:31:11.644 "base_bdevs_list": [ 00:31:11.644 { 00:31:11.644 "name": "BaseBdev1", 00:31:11.644 "uuid": "7de21bb5-5c1f-5514-882e-d546e3ff9970", 00:31:11.644 "is_configured": true, 00:31:11.644 "data_offset": 0, 00:31:11.644 "data_size": 65536 00:31:11.644 }, 00:31:11.644 { 00:31:11.644 "name": "BaseBdev2", 00:31:11.644 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:11.644 "is_configured": true, 00:31:11.644 "data_offset": 0, 00:31:11.644 "data_size": 65536 00:31:11.644 } 00:31:11.644 ] 00:31:11.644 }' 00:31:11.644 11:54:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.644 11:54:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:12.210 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:12.210 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:12.468 [2024-06-10 11:54:44.326481] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:12.468 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:31:12.468 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.468 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:12.726 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:31:12.726 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:31:12.726 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:12.726 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:31:12.726 [2024-06-10 11:54:44.640983] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:12.726 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:12.726 Zero copy mechanism will not be used. 00:31:12.726 Running I/O for 60 seconds... 
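The bdev stack exercised here is built entirely from RPCs visible in the trace above: each RAID1 member is a 32 MiB, 512-byte-block malloc bdev (65536 blocks, matching the blockcnt and data_size values logged) wrapped in a passthru bdev, and the future rebuild target sits behind a delay bdev so the rebuild stays observable in flight. A condensed sketch of that sequence, assuming the same rpc.py socket as above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # the state check performed by verify_raid_bdev_state in the trace:
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'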
00:31:12.985 [2024-06-10 11:54:44.811466] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:12.985 [2024-06-10 11:54:44.811891] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.985 11:54:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.243 11:54:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.243 "name": "raid_bdev1", 00:31:13.243 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:13.243 "strip_size_kb": 0, 00:31:13.243 "state": "online", 00:31:13.243 "raid_level": "raid1", 00:31:13.243 "superblock": false, 00:31:13.243 "num_base_bdevs": 2, 00:31:13.243 "num_base_bdevs_discovered": 1, 00:31:13.243 "num_base_bdevs_operational": 1, 00:31:13.243 "base_bdevs_list": [ 00:31:13.243 { 00:31:13.243 "name": null, 00:31:13.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.243 "is_configured": false, 00:31:13.243 "data_offset": 0, 00:31:13.243 "data_size": 65536 00:31:13.243 }, 00:31:13.243 { 00:31:13.243 "name": "BaseBdev2", 00:31:13.243 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:13.243 "is_configured": true, 00:31:13.243 "data_offset": 0, 00:31:13.243 "data_size": 65536 00:31:13.243 } 00:31:13.243 ] 00:31:13.243 }' 00:31:13.243 11:54:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.243 11:54:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:13.809 11:54:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:14.079 [2024-06-10 11:54:46.091938] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:14.357 11:54:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:14.357 [2024-06-10 11:54:46.156608] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:14.357 [2024-06-10 11:54:46.159026] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:14.357 [2024-06-10 11:54:46.283862] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:14.357 [2024-06-10 11:54:46.284628] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:14.616 [2024-06-10 11:54:46.426057] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:14.616 [2024-06-10 11:54:46.426622] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:14.874 [2024-06-10 11:54:46.772183] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:14.874 [2024-06-10 11:54:46.773007] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:14.874 [2024-06-10 11:54:46.890718] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:14.874 [2024-06-10 11:54:46.891282] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:15.133 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:15.133 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:15.133 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:15.133 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:15.133 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:15.133 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.133 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.393 [2024-06-10 11:54:47.219800] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:15.393 [2024-06-10 11:54:47.452179] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:15.651 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:15.651 "name": "raid_bdev1", 00:31:15.651 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:15.651 "strip_size_kb": 0, 00:31:15.651 "state": "online", 00:31:15.651 "raid_level": "raid1", 00:31:15.651 "superblock": false, 00:31:15.651 "num_base_bdevs": 2, 00:31:15.651 "num_base_bdevs_discovered": 2, 00:31:15.651 "num_base_bdevs_operational": 2, 00:31:15.651 "process": { 00:31:15.651 "type": "rebuild", 00:31:15.651 "target": "spare", 00:31:15.651 "progress": { 00:31:15.651 "blocks": 14336, 00:31:15.651 "percent": 21 00:31:15.651 } 00:31:15.651 }, 00:31:15.651 "base_bdevs_list": [ 00:31:15.651 { 00:31:15.651 "name": "spare", 00:31:15.651 "uuid": "26f344ac-45ca-5755-9aac-183f3731e789", 00:31:15.651 "is_configured": true, 00:31:15.651 "data_offset": 0, 00:31:15.651 "data_size": 65536 00:31:15.651 }, 00:31:15.651 { 00:31:15.651 "name": "BaseBdev2", 00:31:15.651 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:15.651 "is_configured": true, 00:31:15.651 "data_offset": 0, 00:31:15.651 "data_size": 65536 
00:31:15.651 } 00:31:15.651 ] 00:31:15.651 }' 00:31:15.651 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:15.651 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:15.651 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:15.651 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:15.651 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:15.909 [2024-06-10 11:54:47.713393] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:15.909 [2024-06-10 11:54:47.721286] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:15.909 [2024-06-10 11:54:47.816806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:15.909 [2024-06-10 11:54:47.848771] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:15.909 [2024-06-10 11:54:47.858988] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:15.909 [2024-06-10 11:54:47.859269] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:15.909 [2024-06-10 11:54:47.859321] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:15.909 [2024-06-10 11:54:47.911640] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.909 11:54:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.474 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.474 "name": "raid_bdev1", 00:31:16.474 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:16.474 "strip_size_kb": 0, 00:31:16.474 "state": "online", 00:31:16.474 "raid_level": "raid1", 00:31:16.474 "superblock": false, 00:31:16.474 "num_base_bdevs": 2, 
00:31:16.474 "num_base_bdevs_discovered": 1, 00:31:16.475 "num_base_bdevs_operational": 1, 00:31:16.475 "base_bdevs_list": [ 00:31:16.475 { 00:31:16.475 "name": null, 00:31:16.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.475 "is_configured": false, 00:31:16.475 "data_offset": 0, 00:31:16.475 "data_size": 65536 00:31:16.475 }, 00:31:16.475 { 00:31:16.475 "name": "BaseBdev2", 00:31:16.475 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:16.475 "is_configured": true, 00:31:16.475 "data_offset": 0, 00:31:16.475 "data_size": 65536 00:31:16.475 } 00:31:16.475 ] 00:31:16.475 }' 00:31:16.475 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.475 11:54:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:17.041 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:17.041 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:17.042 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:17.042 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:17.042 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:17.042 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.042 11:54:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.354 11:54:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:17.355 "name": "raid_bdev1", 00:31:17.355 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:17.355 "strip_size_kb": 0, 00:31:17.355 "state": "online", 00:31:17.355 "raid_level": "raid1", 00:31:17.355 "superblock": false, 00:31:17.355 "num_base_bdevs": 2, 00:31:17.355 "num_base_bdevs_discovered": 1, 00:31:17.355 "num_base_bdevs_operational": 1, 00:31:17.355 "base_bdevs_list": [ 00:31:17.355 { 00:31:17.355 "name": null, 00:31:17.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:17.355 "is_configured": false, 00:31:17.355 "data_offset": 0, 00:31:17.355 "data_size": 65536 00:31:17.355 }, 00:31:17.355 { 00:31:17.355 "name": "BaseBdev2", 00:31:17.355 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:17.355 "is_configured": true, 00:31:17.355 "data_offset": 0, 00:31:17.355 "data_size": 65536 00:31:17.355 } 00:31:17.355 ] 00:31:17.355 }' 00:31:17.355 11:54:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:17.355 11:54:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:17.355 11:54:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:17.355 11:54:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:17.355 11:54:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:17.613 [2024-06-10 11:54:49.587287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:17.614 11:54:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:17.614 [2024-06-10 11:54:49.649889] bdev_raid.c: 251:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:17.614 [2024-06-10 11:54:49.652121] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:17.873 [2024-06-10 11:54:49.783792] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:18.132 [2024-06-10 11:54:50.020239] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:18.132 [2024-06-10 11:54:50.020758] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:18.697 [2024-06-10 11:54:50.488192] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:18.697 [2024-06-10 11:54:50.488722] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:18.697 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:18.697 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:18.697 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:18.698 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:18.698 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:18.698 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.698 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:18.957 "name": "raid_bdev1", 00:31:18.957 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:18.957 "strip_size_kb": 0, 00:31:18.957 "state": "online", 00:31:18.957 "raid_level": "raid1", 00:31:18.957 "superblock": false, 00:31:18.957 "num_base_bdevs": 2, 00:31:18.957 "num_base_bdevs_discovered": 2, 00:31:18.957 "num_base_bdevs_operational": 2, 00:31:18.957 "process": { 00:31:18.957 "type": "rebuild", 00:31:18.957 "target": "spare", 00:31:18.957 "progress": { 00:31:18.957 "blocks": 14336, 00:31:18.957 "percent": 21 00:31:18.957 } 00:31:18.957 }, 00:31:18.957 "base_bdevs_list": [ 00:31:18.957 { 00:31:18.957 "name": "spare", 00:31:18.957 "uuid": "26f344ac-45ca-5755-9aac-183f3731e789", 00:31:18.957 "is_configured": true, 00:31:18.957 "data_offset": 0, 00:31:18.957 "data_size": 65536 00:31:18.957 }, 00:31:18.957 { 00:31:18.957 "name": "BaseBdev2", 00:31:18.957 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:18.957 "is_configured": true, 00:31:18.957 "data_offset": 0, 00:31:18.957 "data_size": 65536 00:31:18.957 } 00:31:18.957 ] 00:31:18.957 }' 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:18.957 [2024-06-10 11:54:50.945661] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:18.957 
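verify_raid_bdev_process above amounts to polling bdev_raid_get_bdevs and extracting the process object with jq. A hand-run sketch of the same checks while the rebuild onto the spare is in flight, using the two filters the script itself applies plus one extra progress filter added here purely for illustration:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info="$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')"
    jq -r '.process.type // "none"'   <<<"$info"    # "rebuild" while running, "none" once finished
    jq -r '.process.target // "none"' <<<"$info"    # "spare" during this rebuild
    jq -r '.process.progress.percent' <<<"$info"    # e.g. 21 at blocks=14336 in the trace above (illustrative filter, not from the script)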
[2024-06-10 11:54:50.946157] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=950 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.957 11:54:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.215 11:54:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:19.215 "name": "raid_bdev1", 00:31:19.215 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:19.215 "strip_size_kb": 0, 00:31:19.215 "state": "online", 00:31:19.215 "raid_level": "raid1", 00:31:19.215 "superblock": false, 00:31:19.215 "num_base_bdevs": 2, 00:31:19.215 "num_base_bdevs_discovered": 2, 00:31:19.215 "num_base_bdevs_operational": 2, 00:31:19.215 "process": { 00:31:19.215 "type": "rebuild", 00:31:19.215 "target": "spare", 00:31:19.215 "progress": { 00:31:19.215 "blocks": 18432, 00:31:19.215 "percent": 28 00:31:19.215 } 00:31:19.215 }, 00:31:19.215 "base_bdevs_list": [ 00:31:19.215 { 00:31:19.215 "name": "spare", 00:31:19.215 "uuid": "26f344ac-45ca-5755-9aac-183f3731e789", 00:31:19.215 "is_configured": true, 00:31:19.215 "data_offset": 0, 00:31:19.215 "data_size": 65536 00:31:19.215 }, 00:31:19.215 { 00:31:19.215 "name": "BaseBdev2", 00:31:19.215 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:19.215 "is_configured": true, 00:31:19.215 "data_offset": 0, 00:31:19.215 "data_size": 65536 00:31:19.215 } 00:31:19.215 ] 00:31:19.215 }' 00:31:19.215 11:54:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:19.474 11:54:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:19.474 11:54:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:19.474 11:54:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:19.474 11:54:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:19.474 [2024-06-10 11:54:51.381593] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:19.733 [2024-06-10 11:54:51.618856] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:31:20.297 [2024-06-10 11:54:52.088766] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:31:20.576 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:20.577 "name": "raid_bdev1", 00:31:20.577 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:20.577 "strip_size_kb": 0, 00:31:20.577 "state": "online", 00:31:20.577 "raid_level": "raid1", 00:31:20.577 "superblock": false, 00:31:20.577 "num_base_bdevs": 2, 00:31:20.577 "num_base_bdevs_discovered": 2, 00:31:20.577 "num_base_bdevs_operational": 2, 00:31:20.577 "process": { 00:31:20.577 "type": "rebuild", 00:31:20.577 "target": "spare", 00:31:20.577 "progress": { 00:31:20.577 "blocks": 43008, 00:31:20.577 "percent": 65 00:31:20.577 } 00:31:20.577 }, 00:31:20.577 "base_bdevs_list": [ 00:31:20.577 { 00:31:20.577 "name": "spare", 00:31:20.577 "uuid": "26f344ac-45ca-5755-9aac-183f3731e789", 00:31:20.577 "is_configured": true, 00:31:20.577 "data_offset": 0, 00:31:20.577 "data_size": 65536 00:31:20.577 }, 00:31:20.577 { 00:31:20.577 "name": "BaseBdev2", 00:31:20.577 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:20.577 "is_configured": true, 00:31:20.577 "data_offset": 0, 00:31:20.577 "data_size": 65536 00:31:20.577 } 00:31:20.577 ] 00:31:20.577 }' 00:31:20.577 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:20.834 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:20.834 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:20.834 [2024-06-10 11:54:52.665037] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:31:20.834 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:20.834 11:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:21.768 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:21.768 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.768 
11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:21.768 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:21.768 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:21.768 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:21.768 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.768 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.026 [2024-06-10 11:54:53.873448] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:22.026 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:22.026 "name": "raid_bdev1", 00:31:22.026 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:22.026 "strip_size_kb": 0, 00:31:22.026 "state": "online", 00:31:22.026 "raid_level": "raid1", 00:31:22.026 "superblock": false, 00:31:22.026 "num_base_bdevs": 2, 00:31:22.026 "num_base_bdevs_discovered": 2, 00:31:22.026 "num_base_bdevs_operational": 2, 00:31:22.026 "process": { 00:31:22.026 "type": "rebuild", 00:31:22.026 "target": "spare", 00:31:22.026 "progress": { 00:31:22.026 "blocks": 65536, 00:31:22.026 "percent": 100 00:31:22.026 } 00:31:22.026 }, 00:31:22.026 "base_bdevs_list": [ 00:31:22.026 { 00:31:22.026 "name": "spare", 00:31:22.026 "uuid": "26f344ac-45ca-5755-9aac-183f3731e789", 00:31:22.026 "is_configured": true, 00:31:22.026 "data_offset": 0, 00:31:22.026 "data_size": 65536 00:31:22.026 }, 00:31:22.026 { 00:31:22.026 "name": "BaseBdev2", 00:31:22.026 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:22.026 "is_configured": true, 00:31:22.026 "data_offset": 0, 00:31:22.026 "data_size": 65536 00:31:22.026 } 00:31:22.026 ] 00:31:22.026 }' 00:31:22.026 11:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:22.026 [2024-06-10 11:54:53.980225] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:22.026 [2024-06-10 11:54:53.982685] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.026 11:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:22.026 11:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:22.026 11:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:22.026 11:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:23.401 "name": "raid_bdev1", 00:31:23.401 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:23.401 "strip_size_kb": 0, 00:31:23.401 "state": "online", 00:31:23.401 "raid_level": "raid1", 00:31:23.401 "superblock": false, 00:31:23.401 "num_base_bdevs": 2, 00:31:23.401 "num_base_bdevs_discovered": 2, 00:31:23.401 "num_base_bdevs_operational": 2, 00:31:23.401 "base_bdevs_list": [ 00:31:23.401 { 00:31:23.401 "name": "spare", 00:31:23.401 "uuid": "26f344ac-45ca-5755-9aac-183f3731e789", 00:31:23.401 "is_configured": true, 00:31:23.401 "data_offset": 0, 00:31:23.401 "data_size": 65536 00:31:23.401 }, 00:31:23.401 { 00:31:23.401 "name": "BaseBdev2", 00:31:23.401 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:23.401 "is_configured": true, 00:31:23.401 "data_offset": 0, 00:31:23.401 "data_size": 65536 00:31:23.401 } 00:31:23.401 ] 00:31:23.401 }' 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.401 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.659 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:23.659 "name": "raid_bdev1", 00:31:23.659 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:23.659 "strip_size_kb": 0, 00:31:23.659 "state": "online", 00:31:23.659 "raid_level": "raid1", 00:31:23.659 "superblock": false, 00:31:23.659 "num_base_bdevs": 2, 00:31:23.659 "num_base_bdevs_discovered": 2, 00:31:23.659 "num_base_bdevs_operational": 2, 00:31:23.659 "base_bdevs_list": [ 00:31:23.659 { 00:31:23.659 "name": "spare", 00:31:23.659 "uuid": "26f344ac-45ca-5755-9aac-183f3731e789", 00:31:23.659 "is_configured": true, 00:31:23.659 "data_offset": 0, 00:31:23.659 "data_size": 65536 00:31:23.659 }, 00:31:23.659 { 00:31:23.659 "name": "BaseBdev2", 00:31:23.659 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:23.659 "is_configured": true, 00:31:23.659 "data_offset": 0, 00:31:23.659 "data_size": 65536 00:31:23.659 } 
00:31:23.659 ] 00:31:23.659 }' 00:31:23.659 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.978 11:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.237 11:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:24.237 "name": "raid_bdev1", 00:31:24.237 "uuid": "3d46e50c-171c-477c-8b69-0f3bafed2b95", 00:31:24.237 "strip_size_kb": 0, 00:31:24.237 "state": "online", 00:31:24.237 "raid_level": "raid1", 00:31:24.237 "superblock": false, 00:31:24.237 "num_base_bdevs": 2, 00:31:24.237 "num_base_bdevs_discovered": 2, 00:31:24.237 "num_base_bdevs_operational": 2, 00:31:24.237 "base_bdevs_list": [ 00:31:24.237 { 00:31:24.237 "name": "spare", 00:31:24.237 "uuid": "26f344ac-45ca-5755-9aac-183f3731e789", 00:31:24.237 "is_configured": true, 00:31:24.237 "data_offset": 0, 00:31:24.237 "data_size": 65536 00:31:24.237 }, 00:31:24.237 { 00:31:24.237 "name": "BaseBdev2", 00:31:24.237 "uuid": "18c93801-cee1-598c-9206-f07d308f083a", 00:31:24.237 "is_configured": true, 00:31:24.237 "data_offset": 0, 00:31:24.237 "data_size": 65536 00:31:24.237 } 00:31:24.237 ] 00:31:24.237 }' 00:31:24.237 11:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:24.237 11:54:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:24.804 11:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:25.063 [2024-06-10 11:54:56.898713] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:25.063 [2024-06-10 11:54:56.898957] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:25.063 00:31:25.063 Latency(us) 00:31:25.063 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.063 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:25.063 raid_bdev1 : 12.31 111.37 334.10 0.00 0.00 12669.77 339.38 115842.68 00:31:25.063 =================================================================================================================== 00:31:25.063 Total : 111.37 334.10 0.00 0.00 12669.77 339.38 115842.68 00:31:25.063 [2024-06-10 11:54:56.984752] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:25.063 [2024-06-10 11:54:56.984989] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:25.063 0 00:31:25.063 [2024-06-10 11:54:56.985125] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:25.063 [2024-06-10 11:54:56.985146] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:31:25.063 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:25.063 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:25.321 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:25.580 /dev/nbd0 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 
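The rebuild-progress checks earlier in this trace (the bdev_raid.sh@182-190 and @705-710 records) all follow one pattern: fetch every raid bdev over the test's RPC socket, keep only raid_bdev1 with jq, and keep sleeping one second while .process still reports an active rebuild targeting the spare. A minimal stand-alone sketch of that loop, using only the commands visible in the trace (the helper name and the 60-second bound are illustrative, not the script's exact source):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Poll raid_bdev1 until its rebuild process entry disappears (sketch, not the script source).
wait_for_rebuild() {
    local timeout=$((SECONDS + 60))    # illustrative bound; the test computes its own
    while (( SECONDS < timeout )); do
        local info type target
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        # .process only exists while a rebuild is running, hence the // "none" fallback.
        type=$(jq -r '.process.type // "none"' <<< "$info")
        target=$(jq -r '.process.target // "none"' <<< "$info")
        [[ $type == rebuild && $target == spare ]] || return 0
        jq -r '.process.progress.percent' <<< "$info"    # prints 28, 65, 100, ... as above
        sleep 1
    done
    return 1
}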
00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:25.580 1+0 records in 00:31:25.580 1+0 records out 00:31:25.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418485 s, 9.8 MB/s 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:25.580 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:31:25.839 /dev/nbd1 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 
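The waitfornbd records just above poll /proc/partitions until the kernel has actually created each NBD device, and the records that follow perform the real data check: a one-block dd read from each device, then cmp comparing the rebuilt spare against the surviving BaseBdev2 byte for byte, then nbd_stop_disk for both. A rough equivalent of that sequence, assuming the same RPC socket and device names as in this run (the 0.1 s back-off is illustrative; the trace does not show the helper's exact delay):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Export the rebuilt target and the remaining base bdev as NBD block devices.
"$rpc" -s "$sock" nbd_start_disk spare     /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk BaseBdev2 /dev/nbd1

# Wait until the kernel lists each device, as waitfornbd does above.
for name in nbd0 nbd1; do
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1    # illustrative back-off
    done
done

# Byte-for-byte comparison: the rebuilt spare must match BaseBdev2 exactly.
cmp -i 0 /dev/nbd0 /dev/nbd1

# Tear the exports down again, nbd1 first as in the trace.
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0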
00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:31:25.839 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:25.840 1+0 records in 00:31:25.840 1+0 records out 00:31:25.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632698 s, 6.5 MB/s 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:25.840 11:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:26.097 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:26.097 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:26.097 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:26.097 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:26.097 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:26.097 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:26.098 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:26.355 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:26.615 11:54:58 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 147346 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@949 -- # '[' -z 147346 ']' 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # kill -0 147346 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # uname 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:26.615 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 147346 00:31:26.872 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:26.872 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:26.872 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # echo 'killing process with pid 147346' 00:31:26.872 killing process with pid 147346 00:31:26.872 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # kill 147346 00:31:26.872 Received shutdown signal, test time was about 14.047003 seconds 00:31:26.872 00:31:26.872 Latency(us) 00:31:26.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.872 =================================================================================================================== 00:31:26.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:26.872 11:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # wait 147346 00:31:26.872 [2024-06-10 11:54:58.691096] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:27.203 [2024-06-10 11:54:58.997764] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:29.107 ************************************ 00:31:29.107 END TEST raid_rebuild_test_io 00:31:29.107 ************************************ 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:29.107 00:31:29.107 real 0m20.552s 00:31:29.107 user 0m30.734s 00:31:29.107 sys 0m2.513s 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # xtrace_disable 
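After the first test's timing summary, the log moves on to raid_rebuild_test_sb_io, which re-runs the rebuild scenario with superblock=true and background_io=true (the local variables at bdev_raid.sh@570-571 below). Its setup starts by launching bdevperf on the dedicated RPC socket and assembling the raid1 array from malloc-backed passthru bdevs (the spare additionally goes through a delay bdev, visible further down). A condensed sketch built only from the commands visible in the following records (the shell variables here are illustrative):

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock

# bdevperf drives 60 s of 50/50 random read/write I/O against raid_bdev1;
# -z makes it wait until perform_tests is triggered over RPC later in the test.
"$spdk"/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Two malloc bdevs wrapped in passthru bdevs serve as the base devices.
for n in 1 2; do
    "$spdk"/scripts/rpc.py -s "$sock" bdev_malloc_create 32 512 -b BaseBdev${n}_malloc
    "$spdk"/scripts/rpc.py -s "$sock" bdev_passthru_create -b BaseBdev${n}_malloc -p BaseBdev${n}
done

# raid1 with an on-disk superblock (-s), matching this test variant.
"$spdk"/scripts/rpc.py -s "$sock" bdev_raid_create -s -r raid1 \
    -b 'BaseBdev1 BaseBdev2' -n raid_bdev1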
00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:29.107 11:55:00 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:31:29.107 11:55:00 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:31:29.107 11:55:00 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:29.107 11:55:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:29.107 ************************************ 00:31:29.107 START TEST raid_rebuild_test_sb_io 00:31:29.107 ************************************ 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true true true 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:29.107 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=147850 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 147850 /var/tmp/spdk-raid.sock 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@830 -- # '[' -z 147850 ']' 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:29.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:29.108 11:55:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:29.108 [2024-06-10 11:55:00.929720] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:31:29.108 [2024-06-10 11:55:00.930135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147850 ] 00:31:29.108 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:29.108 Zero copy mechanism will not be used. 00:31:29.108 [2024-06-10 11:55:01.103461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.365 [2024-06-10 11:55:01.338757] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.628 [2024-06-10 11:55:01.601551] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:29.888 11:55:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:29.888 11:55:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@863 -- # return 0 00:31:29.888 11:55:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:29.888 11:55:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:30.146 BaseBdev1_malloc 00:31:30.146 11:55:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:30.405 [2024-06-10 11:55:02.399778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:30.405 [2024-06-10 11:55:02.400074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:30.405 [2024-06-10 11:55:02.400158] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:30.405 [2024-06-10 11:55:02.400352] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:30.405 [2024-06-10 11:55:02.403037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:30.405 [2024-06-10 11:55:02.403213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:30.405 BaseBdev1 00:31:30.405 
11:55:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:30.405 11:55:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:30.990 BaseBdev2_malloc 00:31:30.990 11:55:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:30.990 [2024-06-10 11:55:02.951141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:30.990 [2024-06-10 11:55:02.951437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:30.990 [2024-06-10 11:55:02.951604] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:30.990 [2024-06-10 11:55:02.951707] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:30.990 [2024-06-10 11:55:02.954310] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:30.990 [2024-06-10 11:55:02.954483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:30.990 BaseBdev2 00:31:30.990 11:55:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:31.248 spare_malloc 00:31:31.248 11:55:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:31.507 spare_delay 00:31:31.507 11:55:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:31.766 [2024-06-10 11:55:03.777473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:31.766 [2024-06-10 11:55:03.777766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.766 [2024-06-10 11:55:03.777924] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:31.766 [2024-06-10 11:55:03.778045] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.766 [2024-06-10 11:55:03.780735] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.766 [2024-06-10 11:55:03.780933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:31.766 spare 00:31:31.766 11:55:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:32.025 [2024-06-10 11:55:04.057756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:32.025 [2024-06-10 11:55:04.060174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:32.025 [2024-06-10 11:55:04.060548] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:31:32.025 [2024-06-10 11:55:04.060696] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:32.025 [2024-06-10 11:55:04.060875] bdev_raid.c: 251:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:32.025 [2024-06-10 11:55:04.061375] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:31:32.025 [2024-06-10 11:55:04.061503] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:31:32.025 [2024-06-10 11:55:04.061794] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:32.025 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:32.283 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.283 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.283 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:32.283 "name": "raid_bdev1", 00:31:32.283 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:32.283 "strip_size_kb": 0, 00:31:32.283 "state": "online", 00:31:32.283 "raid_level": "raid1", 00:31:32.283 "superblock": true, 00:31:32.283 "num_base_bdevs": 2, 00:31:32.283 "num_base_bdevs_discovered": 2, 00:31:32.283 "num_base_bdevs_operational": 2, 00:31:32.283 "base_bdevs_list": [ 00:31:32.283 { 00:31:32.283 "name": "BaseBdev1", 00:31:32.283 "uuid": "9df9710e-ec57-5f2e-acb5-39747d0eb2c4", 00:31:32.283 "is_configured": true, 00:31:32.283 "data_offset": 2048, 00:31:32.283 "data_size": 63488 00:31:32.283 }, 00:31:32.283 { 00:31:32.283 "name": "BaseBdev2", 00:31:32.283 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:32.283 "is_configured": true, 00:31:32.283 "data_offset": 2048, 00:31:32.283 "data_size": 63488 00:31:32.283 } 00:31:32.283 ] 00:31:32.283 }' 00:31:32.283 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:32.283 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:32.849 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:32.849 11:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:33.107 [2024-06-10 11:55:05.090289] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:33.107 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:31:33.107 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.107 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:33.365 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:31:33.365 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:31:33.365 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:33.365 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:31:33.624 [2024-06-10 11:55:05.436029] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:33.624 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:33.624 Zero copy mechanism will not be used. 00:31:33.624 Running I/O for 60 seconds... 00:31:33.624 [2024-06-10 11:55:05.652763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:33.624 [2024-06-10 11:55:05.660174] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.883 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.140 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:34.140 "name": "raid_bdev1", 00:31:34.140 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:34.140 "strip_size_kb": 0, 00:31:34.140 "state": "online", 00:31:34.140 "raid_level": "raid1", 00:31:34.140 "superblock": true, 00:31:34.140 "num_base_bdevs": 2, 00:31:34.140 "num_base_bdevs_discovered": 1, 00:31:34.140 "num_base_bdevs_operational": 1, 00:31:34.140 "base_bdevs_list": [ 00:31:34.140 { 00:31:34.140 "name": null, 00:31:34.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.140 
"is_configured": false, 00:31:34.140 "data_offset": 2048, 00:31:34.140 "data_size": 63488 00:31:34.140 }, 00:31:34.140 { 00:31:34.140 "name": "BaseBdev2", 00:31:34.140 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:34.140 "is_configured": true, 00:31:34.140 "data_offset": 2048, 00:31:34.140 "data_size": 63488 00:31:34.140 } 00:31:34.140 ] 00:31:34.140 }' 00:31:34.140 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:34.140 11:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:34.740 11:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:34.998 [2024-06-10 11:55:06.903666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:34.998 11:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:34.998 [2024-06-10 11:55:06.965418] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:34.998 [2024-06-10 11:55:06.967842] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:35.256 [2024-06-10 11:55:07.085136] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:35.256 [2024-06-10 11:55:07.085904] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:35.515 [2024-06-10 11:55:07.326790] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:35.773 [2024-06-10 11:55:07.830907] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:36.032 11:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:36.032 11:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:36.032 11:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:36.032 11:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:36.032 11:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:36.032 11:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.032 11:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.290 [2024-06-10 11:55:08.197998] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:36.290 [2024-06-10 11:55:08.198514] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:36.290 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:36.290 "name": "raid_bdev1", 00:31:36.290 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:36.290 "strip_size_kb": 0, 00:31:36.290 "state": "online", 00:31:36.290 "raid_level": "raid1", 00:31:36.290 "superblock": true, 00:31:36.291 "num_base_bdevs": 2, 00:31:36.291 "num_base_bdevs_discovered": 2, 00:31:36.291 "num_base_bdevs_operational": 
2, 00:31:36.291 "process": { 00:31:36.291 "type": "rebuild", 00:31:36.291 "target": "spare", 00:31:36.291 "progress": { 00:31:36.291 "blocks": 16384, 00:31:36.291 "percent": 25 00:31:36.291 } 00:31:36.291 }, 00:31:36.291 "base_bdevs_list": [ 00:31:36.291 { 00:31:36.291 "name": "spare", 00:31:36.291 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:36.291 "is_configured": true, 00:31:36.291 "data_offset": 2048, 00:31:36.291 "data_size": 63488 00:31:36.291 }, 00:31:36.291 { 00:31:36.291 "name": "BaseBdev2", 00:31:36.291 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:36.291 "is_configured": true, 00:31:36.291 "data_offset": 2048, 00:31:36.291 "data_size": 63488 00:31:36.291 } 00:31:36.291 ] 00:31:36.291 }' 00:31:36.291 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.291 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:36.291 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.548 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:36.548 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:36.548 [2024-06-10 11:55:08.601023] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:36.806 [2024-06-10 11:55:08.629177] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:36.806 [2024-06-10 11:55:08.737821] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:36.806 [2024-06-10 11:55:08.754892] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.806 [2024-06-10 11:55:08.755099] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:36.806 [2024-06-10 11:55:08.755150] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:36.806 [2024-06-10 11:55:08.813539] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.806 11:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.064 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.064 "name": "raid_bdev1", 00:31:37.064 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:37.064 "strip_size_kb": 0, 00:31:37.064 "state": "online", 00:31:37.064 "raid_level": "raid1", 00:31:37.064 "superblock": true, 00:31:37.064 "num_base_bdevs": 2, 00:31:37.064 "num_base_bdevs_discovered": 1, 00:31:37.064 "num_base_bdevs_operational": 1, 00:31:37.064 "base_bdevs_list": [ 00:31:37.064 { 00:31:37.064 "name": null, 00:31:37.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.064 "is_configured": false, 00:31:37.064 "data_offset": 2048, 00:31:37.064 "data_size": 63488 00:31:37.064 }, 00:31:37.064 { 00:31:37.064 "name": "BaseBdev2", 00:31:37.064 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:37.064 "is_configured": true, 00:31:37.064 "data_offset": 2048, 00:31:37.064 "data_size": 63488 00:31:37.064 } 00:31:37.064 ] 00:31:37.064 }' 00:31:37.064 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.064 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:37.996 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:37.996 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:37.996 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:37.996 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:37.996 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:37.996 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.996 11:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.996 11:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:37.996 "name": "raid_bdev1", 00:31:37.996 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:37.996 "strip_size_kb": 0, 00:31:37.996 "state": "online", 00:31:37.996 "raid_level": "raid1", 00:31:37.996 "superblock": true, 00:31:37.996 "num_base_bdevs": 2, 00:31:37.996 "num_base_bdevs_discovered": 1, 00:31:37.996 "num_base_bdevs_operational": 1, 00:31:37.996 "base_bdevs_list": [ 00:31:37.996 { 00:31:37.996 "name": null, 00:31:37.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.996 "is_configured": false, 00:31:37.996 "data_offset": 2048, 00:31:37.996 "data_size": 63488 00:31:37.996 }, 00:31:37.996 { 00:31:37.996 "name": "BaseBdev2", 00:31:37.996 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:37.996 "is_configured": true, 00:31:37.996 "data_offset": 2048, 00:31:37.996 "data_size": 63488 00:31:37.996 } 00:31:37.996 ] 00:31:37.996 }' 00:31:38.254 11:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:38.254 11:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:38.254 11:55:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:38.254 11:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:38.254 11:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:38.512 [2024-06-10 11:55:10.393634] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:38.513 11:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:38.513 [2024-06-10 11:55:10.472498] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:38.513 [2024-06-10 11:55:10.475047] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:38.771 [2024-06-10 11:55:10.607498] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:38.771 [2024-06-10 11:55:10.721025] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:38.771 [2024-06-10 11:55:10.721602] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:39.338 [2024-06-10 11:55:11.193512] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:39.338 [2024-06-10 11:55:11.194061] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:39.597 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:39.597 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:39.597 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:39.597 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:39.597 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:39.597 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.597 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.597 [2024-06-10 11:55:11.566226] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:39.855 "name": "raid_bdev1", 00:31:39.855 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:39.855 "strip_size_kb": 0, 00:31:39.855 "state": "online", 00:31:39.855 "raid_level": "raid1", 00:31:39.855 "superblock": true, 00:31:39.855 "num_base_bdevs": 2, 00:31:39.855 "num_base_bdevs_discovered": 2, 00:31:39.855 "num_base_bdevs_operational": 2, 00:31:39.855 "process": { 00:31:39.855 "type": "rebuild", 00:31:39.855 "target": "spare", 00:31:39.855 "progress": { 00:31:39.855 "blocks": 16384, 00:31:39.855 "percent": 25 00:31:39.855 } 00:31:39.855 }, 00:31:39.855 "base_bdevs_list": [ 00:31:39.855 { 00:31:39.855 "name": "spare", 00:31:39.855 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:39.855 "is_configured": true, 00:31:39.855 
"data_offset": 2048, 00:31:39.855 "data_size": 63488 00:31:39.855 }, 00:31:39.855 { 00:31:39.855 "name": "BaseBdev2", 00:31:39.855 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:39.855 "is_configured": true, 00:31:39.855 "data_offset": 2048, 00:31:39.855 "data_size": 63488 00:31:39.855 } 00:31:39.855 ] 00:31:39.855 }' 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:39.855 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=971 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.855 11:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.421 11:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:40.421 "name": "raid_bdev1", 00:31:40.421 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:40.421 "strip_size_kb": 0, 00:31:40.421 "state": "online", 00:31:40.421 "raid_level": "raid1", 00:31:40.421 "superblock": true, 00:31:40.421 "num_base_bdevs": 2, 00:31:40.421 "num_base_bdevs_discovered": 2, 00:31:40.421 "num_base_bdevs_operational": 2, 00:31:40.421 "process": { 00:31:40.421 "type": "rebuild", 00:31:40.421 "target": "spare", 00:31:40.421 "progress": { 00:31:40.421 "blocks": 24576, 00:31:40.421 "percent": 38 00:31:40.421 } 00:31:40.421 }, 00:31:40.421 "base_bdevs_list": [ 00:31:40.421 { 00:31:40.421 "name": "spare", 00:31:40.421 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:40.421 "is_configured": true, 00:31:40.421 "data_offset": 2048, 00:31:40.421 "data_size": 63488 00:31:40.421 }, 00:31:40.421 { 00:31:40.421 "name": "BaseBdev2", 00:31:40.421 "uuid": 
"b2a50769-b213-5a42-b540-cea40d213094", 00:31:40.421 "is_configured": true, 00:31:40.421 "data_offset": 2048, 00:31:40.421 "data_size": 63488 00:31:40.421 } 00:31:40.421 ] 00:31:40.421 }' 00:31:40.421 11:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:40.421 11:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:40.421 11:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:40.421 11:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:40.421 11:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:40.421 [2024-06-10 11:55:12.380546] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:40.680 [2024-06-10 11:55:12.702349] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:31:41.247 [2024-06-10 11:55:13.175876] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:31:41.247 [2024-06-10 11:55:13.282904] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:31:41.247 [2024-06-10 11:55:13.283457] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:31:41.505 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:41.505 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:41.505 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:41.505 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:41.505 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:41.505 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:41.505 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.505 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.826 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.826 "name": "raid_bdev1", 00:31:41.826 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:41.826 "strip_size_kb": 0, 00:31:41.826 "state": "online", 00:31:41.826 "raid_level": "raid1", 00:31:41.826 "superblock": true, 00:31:41.826 "num_base_bdevs": 2, 00:31:41.826 "num_base_bdevs_discovered": 2, 00:31:41.826 "num_base_bdevs_operational": 2, 00:31:41.826 "process": { 00:31:41.826 "type": "rebuild", 00:31:41.826 "target": "spare", 00:31:41.826 "progress": { 00:31:41.826 "blocks": 43008, 00:31:41.826 "percent": 67 00:31:41.826 } 00:31:41.826 }, 00:31:41.826 "base_bdevs_list": [ 00:31:41.826 { 00:31:41.826 "name": "spare", 00:31:41.826 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:41.826 "is_configured": true, 00:31:41.826 "data_offset": 2048, 00:31:41.826 "data_size": 63488 00:31:41.826 }, 00:31:41.826 { 00:31:41.826 "name": "BaseBdev2", 00:31:41.826 
"uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:41.826 "is_configured": true, 00:31:41.826 "data_offset": 2048, 00:31:41.826 "data_size": 63488 00:31:41.826 } 00:31:41.826 ] 00:31:41.826 }' 00:31:41.826 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:41.826 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:41.826 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:41.826 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:41.826 11:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:41.826 [2024-06-10 11:55:13.713507] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:31:42.760 [2024-06-10 11:55:14.648073] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:42.760 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:42.760 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:42.760 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:42.760 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:42.760 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:42.760 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:42.760 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.760 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.760 [2024-06-10 11:55:14.748079] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:42.760 [2024-06-10 11:55:14.750774] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:43.018 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:43.018 "name": "raid_bdev1", 00:31:43.018 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:43.018 "strip_size_kb": 0, 00:31:43.018 "state": "online", 00:31:43.018 "raid_level": "raid1", 00:31:43.018 "superblock": true, 00:31:43.018 "num_base_bdevs": 2, 00:31:43.018 "num_base_bdevs_discovered": 2, 00:31:43.018 "num_base_bdevs_operational": 2, 00:31:43.018 "base_bdevs_list": [ 00:31:43.018 { 00:31:43.018 "name": "spare", 00:31:43.018 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:43.018 "is_configured": true, 00:31:43.018 "data_offset": 2048, 00:31:43.018 "data_size": 63488 00:31:43.018 }, 00:31:43.018 { 00:31:43.018 "name": "BaseBdev2", 00:31:43.018 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:43.018 "is_configured": true, 00:31:43.018 "data_offset": 2048, 00:31:43.018 "data_size": 63488 00:31:43.018 } 00:31:43.018 ] 00:31:43.018 }' 00:31:43.018 11:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:43.018 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:43.018 11:55:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.277 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:43.536 "name": "raid_bdev1", 00:31:43.536 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:43.536 "strip_size_kb": 0, 00:31:43.536 "state": "online", 00:31:43.536 "raid_level": "raid1", 00:31:43.536 "superblock": true, 00:31:43.536 "num_base_bdevs": 2, 00:31:43.536 "num_base_bdevs_discovered": 2, 00:31:43.536 "num_base_bdevs_operational": 2, 00:31:43.536 "base_bdevs_list": [ 00:31:43.536 { 00:31:43.536 "name": "spare", 00:31:43.536 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:43.536 "is_configured": true, 00:31:43.536 "data_offset": 2048, 00:31:43.536 "data_size": 63488 00:31:43.536 }, 00:31:43.536 { 00:31:43.536 "name": "BaseBdev2", 00:31:43.536 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:43.536 "is_configured": true, 00:31:43.536 "data_offset": 2048, 00:31:43.536 "data_size": 63488 00:31:43.536 } 00:31:43.536 ] 00:31:43.536 }' 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:43.536 11:55:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.536 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.795 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:43.795 "name": "raid_bdev1", 00:31:43.795 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:43.795 "strip_size_kb": 0, 00:31:43.795 "state": "online", 00:31:43.795 "raid_level": "raid1", 00:31:43.795 "superblock": true, 00:31:43.795 "num_base_bdevs": 2, 00:31:43.795 "num_base_bdevs_discovered": 2, 00:31:43.795 "num_base_bdevs_operational": 2, 00:31:43.795 "base_bdevs_list": [ 00:31:43.795 { 00:31:43.795 "name": "spare", 00:31:43.795 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:43.795 "is_configured": true, 00:31:43.795 "data_offset": 2048, 00:31:43.795 "data_size": 63488 00:31:43.795 }, 00:31:43.795 { 00:31:43.795 "name": "BaseBdev2", 00:31:43.795 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:43.795 "is_configured": true, 00:31:43.795 "data_offset": 2048, 00:31:43.795 "data_size": 63488 00:31:43.795 } 00:31:43.795 ] 00:31:43.795 }' 00:31:43.795 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:43.796 11:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:44.732 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:44.732 [2024-06-10 11:55:16.732847] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:44.732 [2024-06-10 11:55:16.733128] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:44.990 00:31:44.990 Latency(us) 00:31:44.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.990 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:44.990 raid_bdev1 : 11.38 111.23 333.69 0.00 0.00 12292.70 331.58 117839.97 00:31:44.990 =================================================================================================================== 00:31:44.990 Total : 111.23 333.69 0.00 0.00 12292.70 331.58 117839.97 00:31:44.990 [2024-06-10 11:55:16.851917] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.990 0 00:31:44.990 [2024-06-10 11:55:16.852183] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:44.990 [2024-06-10 11:55:16.852363] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:44.990 [2024-06-10 11:55:16.852506] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:31:44.990 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.990 11:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 
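The trace above deletes raid_bdev1 and then confirms via bdev_raid_get_bdevs that no raid bdevs remain (jq length returns 0). A minimal hand-run sketch of that same check, using the rpc.py path, socket, and bdev name taken from this run (not an official test helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Delete the raid bdev, then expect the raid bdev list to be empty.
$rpc -s "$sock" bdev_raid_delete raid_bdev1
count=$($rpc -s "$sock" bdev_raid_get_bdevs all | jq length)
[[ $count == 0 ]] || echo "raid_bdev1 still listed after delete" >&2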
00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:45.249 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:45.510 /dev/nbd0 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:45.510 1+0 records in 00:31:45.510 1+0 records out 00:31:45.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606748 s, 6.8 MB/s 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:45.510 11:55:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:45.510 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:31:45.770 /dev/nbd1 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:45.770 1+0 records in 00:31:45.770 1+0 records out 00:31:45.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577091 s, 7.1 MB/s 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:31:45.770 11:55:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:45.770 11:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:46.029 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:46.029 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:46.029 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:46.029 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:46.029 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:46.029 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:46.029 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:46.289 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # 
break 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:31:46.856 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:47.115 11:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:47.375 [2024-06-10 11:55:19.257767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:47.375 [2024-06-10 11:55:19.258118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:47.375 [2024-06-10 11:55:19.258225] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:47.375 [2024-06-10 11:55:19.258464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:47.375 [2024-06-10 11:55:19.261140] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:47.375 [2024-06-10 11:55:19.261334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:47.375 [2024-06-10 11:55:19.261594] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:47.375 [2024-06-10 11:55:19.261789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:47.375 [2024-06-10 11:55:19.262076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:47.375 spare 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.375 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.375 [2024-06-10 11:55:19.362336] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:31:47.375 [2024-06-10 11:55:19.362578] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:47.375 [2024-06-10 11:55:19.362836] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:31:47.375 
[2024-06-10 11:55:19.363367] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:31:47.375 [2024-06-10 11:55:19.363490] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:31:47.375 [2024-06-10 11:55:19.363765] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:47.634 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:47.634 "name": "raid_bdev1", 00:31:47.634 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:47.634 "strip_size_kb": 0, 00:31:47.634 "state": "online", 00:31:47.634 "raid_level": "raid1", 00:31:47.634 "superblock": true, 00:31:47.634 "num_base_bdevs": 2, 00:31:47.634 "num_base_bdevs_discovered": 2, 00:31:47.634 "num_base_bdevs_operational": 2, 00:31:47.634 "base_bdevs_list": [ 00:31:47.634 { 00:31:47.634 "name": "spare", 00:31:47.634 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:47.634 "is_configured": true, 00:31:47.634 "data_offset": 2048, 00:31:47.634 "data_size": 63488 00:31:47.634 }, 00:31:47.634 { 00:31:47.634 "name": "BaseBdev2", 00:31:47.634 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:47.634 "is_configured": true, 00:31:47.634 "data_offset": 2048, 00:31:47.634 "data_size": 63488 00:31:47.634 } 00:31:47.634 ] 00:31:47.634 }' 00:31:47.634 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:47.634 11:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:48.664 "name": "raid_bdev1", 00:31:48.664 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:48.664 "strip_size_kb": 0, 00:31:48.664 "state": "online", 00:31:48.664 "raid_level": "raid1", 00:31:48.664 "superblock": true, 00:31:48.664 "num_base_bdevs": 2, 00:31:48.664 "num_base_bdevs_discovered": 2, 00:31:48.664 "num_base_bdevs_operational": 2, 00:31:48.664 "base_bdevs_list": [ 00:31:48.664 { 00:31:48.664 "name": "spare", 00:31:48.664 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:48.664 "is_configured": true, 00:31:48.664 "data_offset": 2048, 00:31:48.664 "data_size": 63488 00:31:48.664 }, 00:31:48.664 { 00:31:48.664 "name": "BaseBdev2", 00:31:48.664 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:48.664 "is_configured": true, 00:31:48.664 "data_offset": 2048, 00:31:48.664 "data_size": 63488 00:31:48.664 } 00:31:48.664 ] 00:31:48.664 }' 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:48.664 
11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.664 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:48.930 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:48.930 11:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:49.189 [2024-06-10 11:55:21.130954] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.189 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.446 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:49.446 "name": "raid_bdev1", 00:31:49.446 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:49.446 "strip_size_kb": 0, 00:31:49.446 "state": "online", 00:31:49.446 "raid_level": "raid1", 00:31:49.446 "superblock": true, 00:31:49.446 "num_base_bdevs": 2, 00:31:49.446 "num_base_bdevs_discovered": 1, 00:31:49.447 "num_base_bdevs_operational": 1, 00:31:49.447 "base_bdevs_list": [ 00:31:49.447 { 00:31:49.447 "name": null, 00:31:49.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.447 "is_configured": false, 00:31:49.447 "data_offset": 2048, 00:31:49.447 "data_size": 63488 00:31:49.447 }, 00:31:49.447 { 00:31:49.447 "name": "BaseBdev2", 00:31:49.447 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:49.447 "is_configured": true, 00:31:49.447 "data_offset": 2048, 00:31:49.447 "data_size": 63488 00:31:49.447 } 00:31:49.447 ] 00:31:49.447 }' 00:31:49.447 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
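After bdev_raid_remove_base_bdev takes out "spare", verify_raid_bdev_state expects raid_bdev1 to stay online as a degraded raid1 with a single operational base bdev, as the JSON above shows. A rough sketch of that assertion run by hand, assuming the same RPC socket as this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
info=$($rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# State stays "online" while only one base bdev remains operational.
[[ $(jq -r '.state' <<< "$info") == online ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 1 ]]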
00:31:49.447 11:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:50.381 11:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:50.640 [2024-06-10 11:55:22.443419] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:50.640 [2024-06-10 11:55:22.443840] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:50.640 [2024-06-10 11:55:22.443980] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:50.640 [2024-06-10 11:55:22.444135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:50.640 [2024-06-10 11:55:22.462141] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b340 00:31:50.640 [2024-06-10 11:55:22.464384] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:50.640 11:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:51.573 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:51.573 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:51.573 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:51.573 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:51.573 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:51.573 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.573 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.830 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:51.830 "name": "raid_bdev1", 00:31:51.830 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:51.830 "strip_size_kb": 0, 00:31:51.830 "state": "online", 00:31:51.831 "raid_level": "raid1", 00:31:51.831 "superblock": true, 00:31:51.831 "num_base_bdevs": 2, 00:31:51.831 "num_base_bdevs_discovered": 2, 00:31:51.831 "num_base_bdevs_operational": 2, 00:31:51.831 "process": { 00:31:51.831 "type": "rebuild", 00:31:51.831 "target": "spare", 00:31:51.831 "progress": { 00:31:51.831 "blocks": 24576, 00:31:51.831 "percent": 38 00:31:51.831 } 00:31:51.831 }, 00:31:51.831 "base_bdevs_list": [ 00:31:51.831 { 00:31:51.831 "name": "spare", 00:31:51.831 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:51.831 "is_configured": true, 00:31:51.831 "data_offset": 2048, 00:31:51.831 "data_size": 63488 00:31:51.831 }, 00:31:51.831 { 00:31:51.831 "name": "BaseBdev2", 00:31:51.831 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:51.831 "is_configured": true, 00:31:51.831 "data_offset": 2048, 00:31:51.831 "data_size": 63488 00:31:51.831 } 00:31:51.831 ] 00:31:51.831 }' 00:31:51.831 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:51.831 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:51.831 11:55:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:51.831 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:51.831 11:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:52.088 [2024-06-10 11:55:24.033917] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:52.088 [2024-06-10 11:55:24.075022] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:52.088 [2024-06-10 11:55:24.075285] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:52.088 [2024-06-10 11:55:24.075339] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:52.088 [2024-06-10 11:55:24.075415] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.088 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.346 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.346 "name": "raid_bdev1", 00:31:52.346 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:52.346 "strip_size_kb": 0, 00:31:52.346 "state": "online", 00:31:52.346 "raid_level": "raid1", 00:31:52.346 "superblock": true, 00:31:52.346 "num_base_bdevs": 2, 00:31:52.346 "num_base_bdevs_discovered": 1, 00:31:52.346 "num_base_bdevs_operational": 1, 00:31:52.346 "base_bdevs_list": [ 00:31:52.346 { 00:31:52.346 "name": null, 00:31:52.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:52.346 "is_configured": false, 00:31:52.346 "data_offset": 2048, 00:31:52.346 "data_size": 63488 00:31:52.346 }, 00:31:52.346 { 00:31:52.346 "name": "BaseBdev2", 00:31:52.346 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:52.346 "is_configured": true, 00:31:52.346 "data_offset": 2048, 00:31:52.346 "data_size": 63488 00:31:52.346 } 00:31:52.346 ] 00:31:52.346 }' 00:31:52.346 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.346 
11:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:52.912 11:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:53.171 [2024-06-10 11:55:25.207596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:53.171 [2024-06-10 11:55:25.207880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:53.171 [2024-06-10 11:55:25.207954] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:53.171 [2024-06-10 11:55:25.208065] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:53.171 [2024-06-10 11:55:25.208611] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:53.171 [2024-06-10 11:55:25.208749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:53.171 [2024-06-10 11:55:25.208926] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:53.171 [2024-06-10 11:55:25.208964] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:53.171 [2024-06-10 11:55:25.208993] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:53.171 [2024-06-10 11:55:25.209058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:53.171 [2024-06-10 11:55:25.227360] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:31:53.171 spare 00:31:53.171 [2024-06-10 11:55:25.229650] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:53.429 11:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:54.361 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:54.361 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:54.361 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:54.361 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:54.361 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:54.361 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.361 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.618 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:54.618 "name": "raid_bdev1", 00:31:54.618 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:54.618 "strip_size_kb": 0, 00:31:54.618 "state": "online", 00:31:54.618 "raid_level": "raid1", 00:31:54.618 "superblock": true, 00:31:54.618 "num_base_bdevs": 2, 00:31:54.618 "num_base_bdevs_discovered": 2, 00:31:54.618 "num_base_bdevs_operational": 2, 00:31:54.618 "process": { 00:31:54.618 "type": "rebuild", 00:31:54.618 "target": "spare", 00:31:54.618 "progress": { 00:31:54.618 "blocks": 24576, 00:31:54.618 "percent": 38 00:31:54.618 } 00:31:54.618 }, 00:31:54.618 "base_bdevs_list": [ 
00:31:54.618 { 00:31:54.618 "name": "spare", 00:31:54.618 "uuid": "f450cb7e-9796-5368-87f4-9ced74cc66b6", 00:31:54.618 "is_configured": true, 00:31:54.618 "data_offset": 2048, 00:31:54.618 "data_size": 63488 00:31:54.618 }, 00:31:54.618 { 00:31:54.618 "name": "BaseBdev2", 00:31:54.618 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:54.618 "is_configured": true, 00:31:54.618 "data_offset": 2048, 00:31:54.618 "data_size": 63488 00:31:54.618 } 00:31:54.618 ] 00:31:54.618 }' 00:31:54.618 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:54.618 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:54.618 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:54.618 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:54.618 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:54.875 [2024-06-10 11:55:26.795591] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:54.875 [2024-06-10 11:55:26.840560] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:54.875 [2024-06-10 11:55:26.840801] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:54.875 [2024-06-10 11:55:26.840854] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:54.875 [2024-06-10 11:55:26.840935] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.875 11:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.132 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:55.132 "name": "raid_bdev1", 00:31:55.132 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:55.132 "strip_size_kb": 0, 00:31:55.132 "state": "online", 00:31:55.132 "raid_level": "raid1", 00:31:55.132 "superblock": true, 00:31:55.132 
"num_base_bdevs": 2, 00:31:55.132 "num_base_bdevs_discovered": 1, 00:31:55.132 "num_base_bdevs_operational": 1, 00:31:55.132 "base_bdevs_list": [ 00:31:55.132 { 00:31:55.132 "name": null, 00:31:55.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.132 "is_configured": false, 00:31:55.132 "data_offset": 2048, 00:31:55.132 "data_size": 63488 00:31:55.132 }, 00:31:55.132 { 00:31:55.132 "name": "BaseBdev2", 00:31:55.132 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:55.132 "is_configured": true, 00:31:55.132 "data_offset": 2048, 00:31:55.132 "data_size": 63488 00:31:55.132 } 00:31:55.132 ] 00:31:55.132 }' 00:31:55.132 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:55.132 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:55.696 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:55.696 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:55.696 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:55.696 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:55.696 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:55.696 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.696 11:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.261 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:56.261 "name": "raid_bdev1", 00:31:56.261 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:56.261 "strip_size_kb": 0, 00:31:56.261 "state": "online", 00:31:56.261 "raid_level": "raid1", 00:31:56.261 "superblock": true, 00:31:56.261 "num_base_bdevs": 2, 00:31:56.261 "num_base_bdevs_discovered": 1, 00:31:56.261 "num_base_bdevs_operational": 1, 00:31:56.261 "base_bdevs_list": [ 00:31:56.261 { 00:31:56.261 "name": null, 00:31:56.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.261 "is_configured": false, 00:31:56.261 "data_offset": 2048, 00:31:56.261 "data_size": 63488 00:31:56.261 }, 00:31:56.261 { 00:31:56.261 "name": "BaseBdev2", 00:31:56.261 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:56.261 "is_configured": true, 00:31:56.261 "data_offset": 2048, 00:31:56.261 "data_size": 63488 00:31:56.261 } 00:31:56.261 ] 00:31:56.261 }' 00:31:56.261 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:56.261 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:56.261 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:56.262 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:56.262 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:56.520 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:56.779 
[2024-06-10 11:55:28.720519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:56.779 [2024-06-10 11:55:28.720835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:56.779 [2024-06-10 11:55:28.720998] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:31:56.779 [2024-06-10 11:55:28.721102] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.779 [2024-06-10 11:55:28.721708] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.779 [2024-06-10 11:55:28.721860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:56.779 [2024-06-10 11:55:28.722107] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:56.779 [2024-06-10 11:55:28.722212] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:56.779 [2024-06-10 11:55:28.722300] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:56.779 BaseBdev1 00:31:56.779 11:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.715 11:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.290 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.290 "name": "raid_bdev1", 00:31:58.290 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:58.290 "strip_size_kb": 0, 00:31:58.290 "state": "online", 00:31:58.290 "raid_level": "raid1", 00:31:58.290 "superblock": true, 00:31:58.290 "num_base_bdevs": 2, 00:31:58.290 "num_base_bdevs_discovered": 1, 00:31:58.290 "num_base_bdevs_operational": 1, 00:31:58.290 "base_bdevs_list": [ 00:31:58.290 { 00:31:58.290 "name": null, 00:31:58.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.290 "is_configured": false, 00:31:58.290 "data_offset": 2048, 00:31:58.290 "data_size": 63488 00:31:58.290 }, 00:31:58.290 { 00:31:58.290 "name": "BaseBdev2", 00:31:58.290 "uuid": 
"b2a50769-b213-5a42-b540-cea40d213094", 00:31:58.290 "is_configured": true, 00:31:58.290 "data_offset": 2048, 00:31:58.290 "data_size": 63488 00:31:58.290 } 00:31:58.290 ] 00:31:58.290 }' 00:31:58.290 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.290 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:58.857 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:58.857 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:58.857 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:58.857 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:58.857 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:58.857 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.857 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.115 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:59.115 "name": "raid_bdev1", 00:31:59.115 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:31:59.115 "strip_size_kb": 0, 00:31:59.115 "state": "online", 00:31:59.115 "raid_level": "raid1", 00:31:59.115 "superblock": true, 00:31:59.115 "num_base_bdevs": 2, 00:31:59.115 "num_base_bdevs_discovered": 1, 00:31:59.115 "num_base_bdevs_operational": 1, 00:31:59.115 "base_bdevs_list": [ 00:31:59.115 { 00:31:59.115 "name": null, 00:31:59.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.115 "is_configured": false, 00:31:59.115 "data_offset": 2048, 00:31:59.115 "data_size": 63488 00:31:59.115 }, 00:31:59.115 { 00:31:59.115 "name": "BaseBdev2", 00:31:59.115 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:31:59.115 "is_configured": true, 00:31:59.115 "data_offset": 2048, 00:31:59.115 "data_size": 63488 00:31:59.115 } 00:31:59.115 ] 00:31:59.115 }' 00:31:59.115 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:59.115 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:59.115 11:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # local es=0 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:59.115 11:55:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:59.115 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:59.373 [2024-06-10 11:55:31.236730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:59.373 [2024-06-10 11:55:31.237092] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:59.373 [2024-06-10 11:55:31.237212] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:59.373 request: 00:31:59.373 { 00:31:59.373 "base_bdev": "BaseBdev1", 00:31:59.373 "raid_bdev": "raid_bdev1", 00:31:59.373 "method": "bdev_raid_add_base_bdev", 00:31:59.373 "req_id": 1 00:31:59.373 } 00:31:59.373 Got JSON-RPC error response 00:31:59.373 response: 00:31:59.373 { 00:31:59.373 "code": -22, 00:31:59.373 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:59.373 } 00:31:59.373 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # es=1 00:31:59.373 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:59.373 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:59.373 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:59.373 11:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:00.305 11:55:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.305 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.562 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:00.563 "name": "raid_bdev1", 00:32:00.563 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:32:00.563 "strip_size_kb": 0, 00:32:00.563 "state": "online", 00:32:00.563 "raid_level": "raid1", 00:32:00.563 "superblock": true, 00:32:00.563 "num_base_bdevs": 2, 00:32:00.563 "num_base_bdevs_discovered": 1, 00:32:00.563 "num_base_bdevs_operational": 1, 00:32:00.563 "base_bdevs_list": [ 00:32:00.563 { 00:32:00.563 "name": null, 00:32:00.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.563 "is_configured": false, 00:32:00.563 "data_offset": 2048, 00:32:00.563 "data_size": 63488 00:32:00.563 }, 00:32:00.563 { 00:32:00.563 "name": "BaseBdev2", 00:32:00.563 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:32:00.563 "is_configured": true, 00:32:00.563 "data_offset": 2048, 00:32:00.563 "data_size": 63488 00:32:00.563 } 00:32:00.563 ] 00:32:00.563 }' 00:32:00.563 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:00.563 11:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:01.497 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:01.497 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:01.497 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:01.497 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:01.498 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:01.498 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.498 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.755 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:01.755 "name": "raid_bdev1", 00:32:01.755 "uuid": "ae67bb7c-154e-4e7c-9be0-54aec653653a", 00:32:01.755 "strip_size_kb": 0, 00:32:01.755 "state": "online", 00:32:01.755 "raid_level": "raid1", 00:32:01.755 "superblock": true, 00:32:01.755 "num_base_bdevs": 2, 00:32:01.755 "num_base_bdevs_discovered": 1, 00:32:01.755 "num_base_bdevs_operational": 1, 00:32:01.755 "base_bdevs_list": [ 00:32:01.755 { 00:32:01.755 "name": null, 00:32:01.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.756 "is_configured": false, 00:32:01.756 "data_offset": 2048, 00:32:01.756 "data_size": 63488 00:32:01.756 }, 00:32:01.756 { 00:32:01.756 "name": "BaseBdev2", 00:32:01.756 "uuid": "b2a50769-b213-5a42-b540-cea40d213094", 00:32:01.756 "is_configured": true, 00:32:01.756 "data_offset": 2048, 00:32:01.756 "data_size": 63488 00:32:01.756 } 00:32:01.756 ] 00:32:01.756 }' 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 
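The verification steps traced above all follow the same pattern: dump the RAID bdevs over JSON-RPC with bdev_raid_get_bdevs, keep the one of interest with jq, and compare individual fields. A minimal standalone sketch of that pattern follows; the rpc.py path, socket path, and bdev name are taken from the trace, while the variable names and the reduced set of checks are illustrative only and not part of bdev_raid.sh.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Dump all RAID bdevs and keep only raid_bdev1, as the test helpers do.
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# A subset of what verify_raid_bdev_state / verify_raid_bdev_process check:
state=$(jq -r '.state' <<< "$info")                      # expect "online"
ptype=$(jq -r '.process.type // "none"' <<< "$info")     # expect "none" when no rebuild is running
target=$(jq -r '.process.target // "none"' <<< "$info")  # expect "none"
[[ $state == online && $ptype == none && $target == none ]] || exit 1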
00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 147850 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@949 -- # '[' -z 147850 ']' 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # kill -0 147850 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # uname 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 147850 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # echo 'killing process with pid 147850' 00:32:01.756 killing process with pid 147850 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # kill 147850 00:32:01.756 Received shutdown signal, test time was about 28.247881 seconds 00:32:01.756 00:32:01.756 Latency(us) 00:32:01.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.756 =================================================================================================================== 00:32:01.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:01.756 11:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # wait 147850 00:32:01.756 [2024-06-10 11:55:33.687103] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:01.756 [2024-06-10 11:55:33.687226] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:01.756 [2024-06-10 11:55:33.687276] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:01.756 [2024-06-10 11:55:33.687286] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:32:02.013 [2024-06-10 11:55:33.971145] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:03.914 ************************************ 00:32:03.914 END TEST raid_rebuild_test_sb_io 00:32:03.914 ************************************ 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:32:03.914 00:32:03.914 real 0m34.828s 00:32:03.914 user 0m55.356s 00:32:03.914 sys 0m4.018s 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:03.914 11:55:35 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:32:03.914 11:55:35 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:32:03.914 11:55:35 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:32:03.914 11:55:35 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:03.914 11:55:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:03.914 
************************************ 00:32:03.914 START TEST raid_rebuild_test 00:32:03.914 ************************************ 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 4 false false true 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=148738 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 148738 /var/tmp/spdk-raid.sock 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@830 -- # '[' -z 148738 ']' 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:03.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:03.914 11:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.914 [2024-06-10 11:55:35.825261] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:32:03.914 [2024-06-10 11:55:35.825763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148738 ] 00:32:03.914 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:03.914 Zero copy mechanism will not be used. 00:32:04.173 [2024-06-10 11:55:35.993822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.432 [2024-06-10 11:55:36.279688] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.690 [2024-06-10 11:55:36.522658] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:04.949 11:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:04.949 11:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@863 -- # return 0 00:32:04.949 11:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:04.949 11:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:05.207 BaseBdev1_malloc 00:32:05.207 11:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:05.465 [2024-06-10 11:55:37.296468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:05.465 [2024-06-10 11:55:37.296776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.465 [2024-06-10 11:55:37.296921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:32:05.465 [2024-06-10 11:55:37.297032] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.465 [2024-06-10 11:55:37.299764] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.465 [2024-06-10 11:55:37.299955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:05.465 BaseBdev1 00:32:05.465 11:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:05.465 11:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:05.724 BaseBdev2_malloc 00:32:05.724 11:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:05.983 [2024-06-10 11:55:37.844905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:05.983 [2024-06-10 11:55:37.845226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.983 [2024-06-10 11:55:37.845339] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:32:05.983 [2024-06-10 11:55:37.845553] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.983 [2024-06-10 11:55:37.848226] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.983 [2024-06-10 11:55:37.848411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:05.983 BaseBdev2 00:32:05.983 11:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:05.983 11:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:06.242 BaseBdev3_malloc 00:32:06.242 11:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:06.242 [2024-06-10 11:55:38.275817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:06.242 [2024-06-10 11:55:38.276138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:06.242 [2024-06-10 11:55:38.276212] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:06.242 [2024-06-10 11:55:38.276327] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:06.242 [2024-06-10 11:55:38.278715] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:06.242 [2024-06-10 11:55:38.278873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:06.242 BaseBdev3 00:32:06.242 11:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:06.242 11:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:06.502 BaseBdev4_malloc 00:32:06.502 11:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:06.761 [2024-06-10 11:55:38.732119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:06.761 [2024-06-10 11:55:38.732425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:06.761 [2024-06-10 11:55:38.732498] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:06.761 [2024-06-10 11:55:38.732604] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:06.761 [2024-06-10 11:55:38.735169] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:06.761 [2024-06-10 11:55:38.735362] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:06.761 BaseBdev4 00:32:06.761 11:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:07.055 spare_malloc 00:32:07.055 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:07.314 spare_delay 00:32:07.314 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:07.573 [2024-06-10 11:55:39.541365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:07.573 [2024-06-10 11:55:39.541671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:07.573 [2024-06-10 11:55:39.541740] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:07.573 [2024-06-10 11:55:39.541849] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:07.573 [2024-06-10 11:55:39.544204] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:07.573 [2024-06-10 11:55:39.544362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:07.573 spare 00:32:07.573 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:32:07.831 [2024-06-10 11:55:39.733455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:07.831 [2024-06-10 11:55:39.735751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:07.831 [2024-06-10 11:55:39.735947] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:07.831 [2024-06-10 11:55:39.736031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:07.831 [2024-06-10 11:55:39.736220] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:32:07.831 [2024-06-10 11:55:39.736259] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:32:07.831 [2024-06-10 11:55:39.736540] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:07.831 [2024-06-10 11:55:39.737121] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:32:07.831 [2024-06-10 11:55:39.737243] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:32:07.831 [2024-06-10 11:55:39.737606] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.831 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.090 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:08.090 "name": "raid_bdev1", 00:32:08.090 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:08.090 "strip_size_kb": 0, 00:32:08.090 "state": "online", 00:32:08.090 "raid_level": "raid1", 00:32:08.090 "superblock": false, 00:32:08.090 "num_base_bdevs": 4, 00:32:08.090 "num_base_bdevs_discovered": 4, 00:32:08.090 "num_base_bdevs_operational": 4, 00:32:08.090 "base_bdevs_list": [ 00:32:08.090 { 00:32:08.090 "name": "BaseBdev1", 00:32:08.090 "uuid": "9cf06365-b393-50a0-b0a2-eafcabb6f1c9", 00:32:08.090 "is_configured": true, 00:32:08.090 "data_offset": 0, 00:32:08.090 "data_size": 65536 00:32:08.090 }, 00:32:08.090 { 00:32:08.090 "name": "BaseBdev2", 00:32:08.090 "uuid": "7d1576d7-c816-5249-9ee9-687111efd99c", 00:32:08.090 "is_configured": true, 00:32:08.090 "data_offset": 0, 00:32:08.090 "data_size": 65536 00:32:08.090 }, 00:32:08.090 { 00:32:08.090 "name": "BaseBdev3", 00:32:08.090 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:08.090 "is_configured": true, 00:32:08.090 "data_offset": 0, 00:32:08.090 "data_size": 65536 00:32:08.090 }, 00:32:08.090 { 00:32:08.090 "name": "BaseBdev4", 00:32:08.090 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:08.090 "is_configured": true, 00:32:08.090 "data_offset": 0, 00:32:08.090 "data_size": 65536 00:32:08.090 } 00:32:08.090 ] 00:32:08.090 }' 00:32:08.090 11:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:08.090 11:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:08.657 11:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:08.657 11:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:08.914 [2024-06-10 11:55:40.782210] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:08.914 11:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:32:08.915 11:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.915 11:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:09.173 11:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:32:09.173 11:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:09.173 11:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' 
true = true ']' 00:32:09.173 11:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:09.174 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:09.174 [2024-06-10 11:55:41.198135] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:32:09.174 /dev/nbd0 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:09.432 1+0 records in 00:32:09.432 1+0 records out 00:32:09.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437878 s, 9.4 MB/s 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.432 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:32:09.433 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.433 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:09.433 11:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:32:09.433 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:09.433 11:55:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:09.433 11:55:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:32:09.433 11:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:32:09.433 11:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:32:15.990 65536+0 records in 00:32:15.990 65536+0 records out 00:32:15.990 33554432 bytes (34 MB, 32 MiB) copied, 6.66062 s, 5.0 MB/s 00:32:15.990 11:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:15.990 11:55:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:15.990 11:55:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:15.990 11:55:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:15.990 11:55:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:15.990 11:55:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:15.990 11:55:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:16.249 [2024-06-10 11:55:48.246302] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:16.249 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:16.508 [2024-06-10 11:55:48.526000] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.508 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.075 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:17.075 "name": "raid_bdev1", 00:32:17.075 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:17.075 "strip_size_kb": 0, 00:32:17.075 "state": "online", 00:32:17.075 "raid_level": "raid1", 00:32:17.075 "superblock": false, 00:32:17.075 "num_base_bdevs": 4, 00:32:17.075 "num_base_bdevs_discovered": 3, 00:32:17.075 "num_base_bdevs_operational": 3, 00:32:17.075 "base_bdevs_list": [ 00:32:17.075 { 00:32:17.075 "name": null, 00:32:17.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.075 "is_configured": false, 00:32:17.075 "data_offset": 0, 00:32:17.075 "data_size": 65536 00:32:17.075 }, 00:32:17.075 { 00:32:17.075 "name": "BaseBdev2", 00:32:17.075 "uuid": "7d1576d7-c816-5249-9ee9-687111efd99c", 00:32:17.075 "is_configured": true, 00:32:17.075 "data_offset": 0, 00:32:17.075 "data_size": 65536 00:32:17.075 }, 00:32:17.075 { 00:32:17.075 "name": "BaseBdev3", 00:32:17.075 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:17.075 "is_configured": true, 00:32:17.075 "data_offset": 0, 00:32:17.075 "data_size": 65536 00:32:17.075 }, 00:32:17.075 { 00:32:17.075 "name": "BaseBdev4", 00:32:17.075 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:17.075 "is_configured": true, 00:32:17.075 "data_offset": 0, 00:32:17.075 "data_size": 65536 00:32:17.075 } 00:32:17.075 ] 00:32:17.075 }' 00:32:17.075 11:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:17.075 11:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.641 11:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:17.899 [2024-06-10 11:55:49.730284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:17.899 [2024-06-10 11:55:49.749320] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:32:17.899 [2024-06-10 11:55:49.751853] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:17.899 11:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:32:18.832 11:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:18.832 11:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:18.832 11:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:18.832 11:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:18.832 11:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:18.832 11:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.832 11:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.090 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:19.090 "name": "raid_bdev1", 00:32:19.090 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 
00:32:19.090 "strip_size_kb": 0, 00:32:19.090 "state": "online", 00:32:19.090 "raid_level": "raid1", 00:32:19.090 "superblock": false, 00:32:19.090 "num_base_bdevs": 4, 00:32:19.090 "num_base_bdevs_discovered": 4, 00:32:19.090 "num_base_bdevs_operational": 4, 00:32:19.090 "process": { 00:32:19.090 "type": "rebuild", 00:32:19.090 "target": "spare", 00:32:19.090 "progress": { 00:32:19.090 "blocks": 24576, 00:32:19.090 "percent": 37 00:32:19.090 } 00:32:19.090 }, 00:32:19.090 "base_bdevs_list": [ 00:32:19.090 { 00:32:19.090 "name": "spare", 00:32:19.090 "uuid": "89e71873-7ea8-5c84-b6c9-e44f85b54911", 00:32:19.090 "is_configured": true, 00:32:19.090 "data_offset": 0, 00:32:19.090 "data_size": 65536 00:32:19.090 }, 00:32:19.090 { 00:32:19.090 "name": "BaseBdev2", 00:32:19.090 "uuid": "7d1576d7-c816-5249-9ee9-687111efd99c", 00:32:19.090 "is_configured": true, 00:32:19.090 "data_offset": 0, 00:32:19.090 "data_size": 65536 00:32:19.090 }, 00:32:19.090 { 00:32:19.090 "name": "BaseBdev3", 00:32:19.090 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:19.090 "is_configured": true, 00:32:19.090 "data_offset": 0, 00:32:19.090 "data_size": 65536 00:32:19.090 }, 00:32:19.090 { 00:32:19.090 "name": "BaseBdev4", 00:32:19.090 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:19.090 "is_configured": true, 00:32:19.090 "data_offset": 0, 00:32:19.090 "data_size": 65536 00:32:19.090 } 00:32:19.090 ] 00:32:19.090 }' 00:32:19.090 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:19.090 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:19.090 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:19.090 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:19.090 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:19.349 [2024-06-10 11:55:51.341464] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:19.349 [2024-06-10 11:55:51.362679] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:19.349 [2024-06-10 11:55:51.362959] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:19.349 [2024-06-10 11:55:51.363032] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:19.349 [2024-06-10 11:55:51.363141] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:19.349 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:19.607 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.607 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.865 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:19.865 "name": "raid_bdev1", 00:32:19.865 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:19.865 "strip_size_kb": 0, 00:32:19.865 "state": "online", 00:32:19.865 "raid_level": "raid1", 00:32:19.865 "superblock": false, 00:32:19.865 "num_base_bdevs": 4, 00:32:19.865 "num_base_bdevs_discovered": 3, 00:32:19.865 "num_base_bdevs_operational": 3, 00:32:19.865 "base_bdevs_list": [ 00:32:19.865 { 00:32:19.865 "name": null, 00:32:19.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.865 "is_configured": false, 00:32:19.865 "data_offset": 0, 00:32:19.865 "data_size": 65536 00:32:19.865 }, 00:32:19.865 { 00:32:19.865 "name": "BaseBdev2", 00:32:19.865 "uuid": "7d1576d7-c816-5249-9ee9-687111efd99c", 00:32:19.865 "is_configured": true, 00:32:19.865 "data_offset": 0, 00:32:19.865 "data_size": 65536 00:32:19.865 }, 00:32:19.865 { 00:32:19.865 "name": "BaseBdev3", 00:32:19.865 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:19.865 "is_configured": true, 00:32:19.865 "data_offset": 0, 00:32:19.865 "data_size": 65536 00:32:19.865 }, 00:32:19.865 { 00:32:19.865 "name": "BaseBdev4", 00:32:19.865 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:19.865 "is_configured": true, 00:32:19.865 "data_offset": 0, 00:32:19.865 "data_size": 65536 00:32:19.865 } 00:32:19.865 ] 00:32:19.865 }' 00:32:19.865 11:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:19.865 11:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.432 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:20.432 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:20.432 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:20.432 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:20.432 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:20.432 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.432 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.691 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:20.691 "name": "raid_bdev1", 00:32:20.691 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:20.691 "strip_size_kb": 0, 00:32:20.691 "state": "online", 00:32:20.691 "raid_level": "raid1", 00:32:20.691 "superblock": false, 00:32:20.691 "num_base_bdevs": 4, 00:32:20.691 "num_base_bdevs_discovered": 3, 00:32:20.691 "num_base_bdevs_operational": 3, 00:32:20.691 "base_bdevs_list": [ 00:32:20.691 { 00:32:20.691 "name": null, 00:32:20.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.691 
"is_configured": false, 00:32:20.691 "data_offset": 0, 00:32:20.691 "data_size": 65536 00:32:20.691 }, 00:32:20.691 { 00:32:20.691 "name": "BaseBdev2", 00:32:20.691 "uuid": "7d1576d7-c816-5249-9ee9-687111efd99c", 00:32:20.691 "is_configured": true, 00:32:20.691 "data_offset": 0, 00:32:20.691 "data_size": 65536 00:32:20.691 }, 00:32:20.691 { 00:32:20.691 "name": "BaseBdev3", 00:32:20.691 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:20.691 "is_configured": true, 00:32:20.691 "data_offset": 0, 00:32:20.691 "data_size": 65536 00:32:20.691 }, 00:32:20.691 { 00:32:20.691 "name": "BaseBdev4", 00:32:20.691 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:20.691 "is_configured": true, 00:32:20.691 "data_offset": 0, 00:32:20.691 "data_size": 65536 00:32:20.691 } 00:32:20.691 ] 00:32:20.691 }' 00:32:20.691 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:20.691 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:20.691 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:20.691 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:20.691 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:20.949 [2024-06-10 11:55:52.920432] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:20.949 [2024-06-10 11:55:52.938320] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:32:20.949 [2024-06-10 11:55:52.940780] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:20.949 11:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:22.325 11:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:22.325 11:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:22.325 11:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:22.325 11:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:22.325 11:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:22.325 11:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.325 11:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:22.325 "name": "raid_bdev1", 00:32:22.325 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:22.325 "strip_size_kb": 0, 00:32:22.325 "state": "online", 00:32:22.325 "raid_level": "raid1", 00:32:22.325 "superblock": false, 00:32:22.325 "num_base_bdevs": 4, 00:32:22.325 "num_base_bdevs_discovered": 4, 00:32:22.325 "num_base_bdevs_operational": 4, 00:32:22.325 "process": { 00:32:22.325 "type": "rebuild", 00:32:22.325 "target": "spare", 00:32:22.325 "progress": { 00:32:22.325 "blocks": 24576, 00:32:22.325 "percent": 37 00:32:22.325 } 00:32:22.325 }, 00:32:22.325 "base_bdevs_list": [ 00:32:22.325 { 00:32:22.325 "name": "spare", 00:32:22.325 "uuid": "89e71873-7ea8-5c84-b6c9-e44f85b54911", 
00:32:22.325 "is_configured": true, 00:32:22.325 "data_offset": 0, 00:32:22.325 "data_size": 65536 00:32:22.325 }, 00:32:22.325 { 00:32:22.325 "name": "BaseBdev2", 00:32:22.325 "uuid": "7d1576d7-c816-5249-9ee9-687111efd99c", 00:32:22.325 "is_configured": true, 00:32:22.325 "data_offset": 0, 00:32:22.325 "data_size": 65536 00:32:22.325 }, 00:32:22.325 { 00:32:22.325 "name": "BaseBdev3", 00:32:22.325 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:22.325 "is_configured": true, 00:32:22.325 "data_offset": 0, 00:32:22.325 "data_size": 65536 00:32:22.325 }, 00:32:22.325 { 00:32:22.325 "name": "BaseBdev4", 00:32:22.325 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:22.325 "is_configured": true, 00:32:22.325 "data_offset": 0, 00:32:22.325 "data_size": 65536 00:32:22.325 } 00:32:22.325 ] 00:32:22.325 }' 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:32:22.325 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:22.583 [2024-06-10 11:55:54.562221] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:22.841 [2024-06-10 11:55:54.651940] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.841 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.100 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:23.100 "name": "raid_bdev1", 00:32:23.100 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:23.100 "strip_size_kb": 0, 00:32:23.100 "state": "online", 00:32:23.100 "raid_level": "raid1", 00:32:23.100 "superblock": false, 00:32:23.100 "num_base_bdevs": 4, 00:32:23.100 
"num_base_bdevs_discovered": 3, 00:32:23.100 "num_base_bdevs_operational": 3, 00:32:23.100 "process": { 00:32:23.100 "type": "rebuild", 00:32:23.100 "target": "spare", 00:32:23.100 "progress": { 00:32:23.100 "blocks": 38912, 00:32:23.100 "percent": 59 00:32:23.100 } 00:32:23.100 }, 00:32:23.100 "base_bdevs_list": [ 00:32:23.100 { 00:32:23.100 "name": "spare", 00:32:23.100 "uuid": "89e71873-7ea8-5c84-b6c9-e44f85b54911", 00:32:23.100 "is_configured": true, 00:32:23.100 "data_offset": 0, 00:32:23.100 "data_size": 65536 00:32:23.100 }, 00:32:23.100 { 00:32:23.100 "name": null, 00:32:23.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.100 "is_configured": false, 00:32:23.100 "data_offset": 0, 00:32:23.100 "data_size": 65536 00:32:23.100 }, 00:32:23.100 { 00:32:23.100 "name": "BaseBdev3", 00:32:23.100 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:23.100 "is_configured": true, 00:32:23.100 "data_offset": 0, 00:32:23.100 "data_size": 65536 00:32:23.100 }, 00:32:23.100 { 00:32:23.100 "name": "BaseBdev4", 00:32:23.100 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:23.100 "is_configured": true, 00:32:23.100 "data_offset": 0, 00:32:23.100 "data_size": 65536 00:32:23.100 } 00:32:23.100 ] 00:32:23.100 }' 00:32:23.100 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:23.100 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:23.100 11:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1015 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.100 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.361 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:23.361 "name": "raid_bdev1", 00:32:23.361 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:23.361 "strip_size_kb": 0, 00:32:23.361 "state": "online", 00:32:23.361 "raid_level": "raid1", 00:32:23.361 "superblock": false, 00:32:23.361 "num_base_bdevs": 4, 00:32:23.361 "num_base_bdevs_discovered": 3, 00:32:23.361 "num_base_bdevs_operational": 3, 00:32:23.361 "process": { 00:32:23.361 "type": "rebuild", 00:32:23.361 "target": "spare", 00:32:23.361 "progress": { 00:32:23.361 "blocks": 47104, 00:32:23.361 "percent": 71 00:32:23.361 } 00:32:23.361 }, 00:32:23.361 "base_bdevs_list": [ 00:32:23.361 { 00:32:23.361 "name": "spare", 00:32:23.361 "uuid": "89e71873-7ea8-5c84-b6c9-e44f85b54911", 00:32:23.361 
"is_configured": true, 00:32:23.361 "data_offset": 0, 00:32:23.361 "data_size": 65536 00:32:23.361 }, 00:32:23.361 { 00:32:23.361 "name": null, 00:32:23.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.361 "is_configured": false, 00:32:23.361 "data_offset": 0, 00:32:23.361 "data_size": 65536 00:32:23.361 }, 00:32:23.361 { 00:32:23.361 "name": "BaseBdev3", 00:32:23.361 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:23.361 "is_configured": true, 00:32:23.361 "data_offset": 0, 00:32:23.361 "data_size": 65536 00:32:23.361 }, 00:32:23.361 { 00:32:23.361 "name": "BaseBdev4", 00:32:23.361 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:23.361 "is_configured": true, 00:32:23.361 "data_offset": 0, 00:32:23.361 "data_size": 65536 00:32:23.361 } 00:32:23.361 ] 00:32:23.361 }' 00:32:23.361 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:23.361 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:23.361 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:23.361 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:23.361 11:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:24.295 [2024-06-10 11:55:56.161953] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:24.295 [2024-06-10 11:55:56.162266] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:24.295 [2024-06-10 11:55:56.162452] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:24.553 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:24.553 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:24.553 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:24.553 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:24.553 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:24.553 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:24.553 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.553 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:24.812 "name": "raid_bdev1", 00:32:24.812 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:24.812 "strip_size_kb": 0, 00:32:24.812 "state": "online", 00:32:24.812 "raid_level": "raid1", 00:32:24.812 "superblock": false, 00:32:24.812 "num_base_bdevs": 4, 00:32:24.812 "num_base_bdevs_discovered": 3, 00:32:24.812 "num_base_bdevs_operational": 3, 00:32:24.812 "base_bdevs_list": [ 00:32:24.812 { 00:32:24.812 "name": "spare", 00:32:24.812 "uuid": "89e71873-7ea8-5c84-b6c9-e44f85b54911", 00:32:24.812 "is_configured": true, 00:32:24.812 "data_offset": 0, 00:32:24.812 "data_size": 65536 00:32:24.812 }, 00:32:24.812 { 00:32:24.812 "name": null, 00:32:24.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.812 "is_configured": false, 00:32:24.812 "data_offset": 0, 
00:32:24.812 "data_size": 65536 00:32:24.812 }, 00:32:24.812 { 00:32:24.812 "name": "BaseBdev3", 00:32:24.812 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:24.812 "is_configured": true, 00:32:24.812 "data_offset": 0, 00:32:24.812 "data_size": 65536 00:32:24.812 }, 00:32:24.812 { 00:32:24.812 "name": "BaseBdev4", 00:32:24.812 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:24.812 "is_configured": true, 00:32:24.812 "data_offset": 0, 00:32:24.812 "data_size": 65536 00:32:24.812 } 00:32:24.812 ] 00:32:24.812 }' 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.812 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.071 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:25.071 "name": "raid_bdev1", 00:32:25.071 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:25.071 "strip_size_kb": 0, 00:32:25.071 "state": "online", 00:32:25.071 "raid_level": "raid1", 00:32:25.071 "superblock": false, 00:32:25.071 "num_base_bdevs": 4, 00:32:25.071 "num_base_bdevs_discovered": 3, 00:32:25.071 "num_base_bdevs_operational": 3, 00:32:25.071 "base_bdevs_list": [ 00:32:25.071 { 00:32:25.071 "name": "spare", 00:32:25.071 "uuid": "89e71873-7ea8-5c84-b6c9-e44f85b54911", 00:32:25.071 "is_configured": true, 00:32:25.071 "data_offset": 0, 00:32:25.071 "data_size": 65536 00:32:25.071 }, 00:32:25.071 { 00:32:25.071 "name": null, 00:32:25.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.071 "is_configured": false, 00:32:25.071 "data_offset": 0, 00:32:25.071 "data_size": 65536 00:32:25.071 }, 00:32:25.071 { 00:32:25.071 "name": "BaseBdev3", 00:32:25.071 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:25.071 "is_configured": true, 00:32:25.071 "data_offset": 0, 00:32:25.071 "data_size": 65536 00:32:25.071 }, 00:32:25.071 { 00:32:25.071 "name": "BaseBdev4", 00:32:25.071 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:25.071 "is_configured": true, 00:32:25.071 "data_offset": 0, 00:32:25.071 "data_size": 65536 00:32:25.071 } 00:32:25.071 ] 00:32:25.071 }' 00:32:25.071 11:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:25.071 11:55:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.071 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.329 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:25.329 "name": "raid_bdev1", 00:32:25.329 "uuid": "b0813c40-875b-4f70-8a8f-8a4492900499", 00:32:25.329 "strip_size_kb": 0, 00:32:25.329 "state": "online", 00:32:25.329 "raid_level": "raid1", 00:32:25.329 "superblock": false, 00:32:25.329 "num_base_bdevs": 4, 00:32:25.329 "num_base_bdevs_discovered": 3, 00:32:25.329 "num_base_bdevs_operational": 3, 00:32:25.329 "base_bdevs_list": [ 00:32:25.329 { 00:32:25.329 "name": "spare", 00:32:25.329 "uuid": "89e71873-7ea8-5c84-b6c9-e44f85b54911", 00:32:25.329 "is_configured": true, 00:32:25.329 "data_offset": 0, 00:32:25.329 "data_size": 65536 00:32:25.329 }, 00:32:25.329 { 00:32:25.329 "name": null, 00:32:25.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.329 "is_configured": false, 00:32:25.329 "data_offset": 0, 00:32:25.329 "data_size": 65536 00:32:25.329 }, 00:32:25.329 { 00:32:25.329 "name": "BaseBdev3", 00:32:25.329 "uuid": "19e68fde-6629-5cdc-b325-e30eb5f7ac30", 00:32:25.329 "is_configured": true, 00:32:25.329 "data_offset": 0, 00:32:25.329 "data_size": 65536 00:32:25.329 }, 00:32:25.329 { 00:32:25.329 "name": "BaseBdev4", 00:32:25.329 "uuid": "e81878c6-9f6d-52ad-b9c5-b6a02c678b02", 00:32:25.329 "is_configured": true, 00:32:25.329 "data_offset": 0, 00:32:25.329 "data_size": 65536 00:32:25.329 } 00:32:25.329 ] 00:32:25.329 }' 00:32:25.329 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:25.329 11:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.894 11:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:26.463 [2024-06-10 11:55:58.220562] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:26.463 [2024-06-10 11:55:58.220779] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:32:26.463 [2024-06-10 11:55:58.220971] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:26.463 [2024-06-10 11:55:58.221141] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:26.463 [2024-06-10 11:55:58.221249] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:32:26.463 11:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:32:26.463 11:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:26.734 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:26.734 /dev/nbd0 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:26.993 1+0 records in 00:32:26.993 1+0 records out 00:32:26.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517091 s, 7.9 MB/s 00:32:26.993 11:55:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:26.993 11:55:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:27.251 /dev/nbd1 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:27.251 1+0 records in 00:32:27.251 1+0 records out 00:32:27.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581683 s, 7.0 MB/s 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:27.251 11:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:32:27.508 11:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:27.508 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:27.508 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:32:27.508 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:27.508 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:27.508 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:27.508 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:27.765 11:55:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 148738 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@949 -- # '[' -z 148738 ']' 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # kill -0 148738 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # uname 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 148738 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 148738' 00:32:28.023 killing process with pid 148738 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # kill 148738 00:32:28.023 Received shutdown 
signal, test time was about 60.000000 seconds 00:32:28.023 00:32:28.023 Latency(us) 00:32:28.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.023 =================================================================================================================== 00:32:28.023 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:28.023 11:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # wait 148738 00:32:28.023 [2024-06-10 11:56:00.072778] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:28.956 [2024-06-10 11:56:00.696103] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:30.853 ************************************ 00:32:30.853 END TEST raid_rebuild_test 00:32:30.853 ************************************ 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:32:30.853 00:32:30.853 real 0m26.677s 00:32:30.853 user 0m35.900s 00:32:30.853 sys 0m4.944s 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.853 11:56:02 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:32:30.853 11:56:02 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:32:30.853 11:56:02 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:30.853 11:56:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:30.853 ************************************ 00:32:30.853 START TEST raid_rebuild_test_sb 00:32:30.853 ************************************ 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 4 true false true 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:30.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:30.853 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=149332 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 149332 /var/tmp/spdk-raid.sock 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@830 -- # '[' -z 149332 ']' 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:30.854 11:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.854 [2024-06-10 11:56:02.555434] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:32:30.854 [2024-06-10 11:56:02.555595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149332 ] 00:32:30.854 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:30.854 Zero copy mechanism will not be used. 
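The base-bdev setup that the raid_rebuild_test_sb trace walks through below reduces to a handful of rpc.py calls against the bdevperf socket. A minimal sketch for reproducing it by hand, assuming bdevperf is already listening on /var/tmp/spdk-raid.sock as launched above and using only the commands that appear in this trace:

  # Sketch only: mirrors the traced bdev_raid.sh@600-612 steps, not a new API.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Create four 32 MiB / 512 B-block malloc bdevs and wrap each in a passthru bdev.
  for i in 1 2 3 4; do
      $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
      $RPC bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
  done

  # Assemble them into a raid1 bdev with an on-disk superblock (-s), as the _sb test does.
  $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

  # Inspect its state the same way verify_raid_bdev_state/verify_raid_bdev_process do.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The later rebuild checks in the trace simply repeat the last query and pull .process.type / .process.target out of the JSON with jq until the rebuild process disappears.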
00:32:30.854 [2024-06-10 11:56:02.723402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.112 [2024-06-10 11:56:03.017356] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.370 [2024-06-10 11:56:03.270322] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:31.627 11:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:31.627 11:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@863 -- # return 0 00:32:31.627 11:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:31.627 11:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:31.884 BaseBdev1_malloc 00:32:31.884 11:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:32.142 [2024-06-10 11:56:04.097190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:32.142 [2024-06-10 11:56:04.097320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:32.142 [2024-06-10 11:56:04.097366] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:32:32.142 [2024-06-10 11:56:04.097390] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:32.142 [2024-06-10 11:56:04.100058] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:32.142 [2024-06-10 11:56:04.100114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:32.142 BaseBdev1 00:32:32.142 11:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:32.142 11:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:32.400 BaseBdev2_malloc 00:32:32.400 11:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:32.658 [2024-06-10 11:56:04.604388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:32.658 [2024-06-10 11:56:04.604538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:32.658 [2024-06-10 11:56:04.604614] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:32:32.658 [2024-06-10 11:56:04.604637] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:32.658 [2024-06-10 11:56:04.607271] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:32.658 [2024-06-10 11:56:04.607324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:32.658 BaseBdev2 00:32:32.658 11:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:32.658 11:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:32.916 BaseBdev3_malloc 00:32:32.916 11:56:04 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:33.174 [2024-06-10 11:56:05.096286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:33.174 [2024-06-10 11:56:05.096381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:33.174 [2024-06-10 11:56:05.096432] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:33.174 [2024-06-10 11:56:05.096463] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:33.174 [2024-06-10 11:56:05.099134] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:33.174 [2024-06-10 11:56:05.099200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:33.174 BaseBdev3 00:32:33.174 11:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:33.174 11:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:33.432 BaseBdev4_malloc 00:32:33.432 11:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:33.689 [2024-06-10 11:56:05.552455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:33.689 [2024-06-10 11:56:05.552568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:33.689 [2024-06-10 11:56:05.552604] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:33.689 [2024-06-10 11:56:05.552634] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:33.690 [2024-06-10 11:56:05.555134] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:33.690 [2024-06-10 11:56:05.555188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:33.690 BaseBdev4 00:32:33.690 11:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:33.947 spare_malloc 00:32:33.947 11:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:34.205 spare_delay 00:32:34.205 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:34.463 [2024-06-10 11:56:06.460824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:34.463 [2024-06-10 11:56:06.460938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.463 [2024-06-10 11:56:06.460978] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:34.463 [2024-06-10 11:56:06.461007] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.463 [2024-06-10 11:56:06.463866] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.463 [2024-06-10 
11:56:06.463929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:34.463 spare 00:32:34.463 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:32:34.720 [2024-06-10 11:56:06.729078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:34.720 [2024-06-10 11:56:06.732115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:34.720 [2024-06-10 11:56:06.732241] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:34.720 [2024-06-10 11:56:06.732325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:34.720 [2024-06-10 11:56:06.732713] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:32:34.720 [2024-06-10 11:56:06.732750] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:34.720 [2024-06-10 11:56:06.732953] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:34.720 [2024-06-10 11:56:06.733501] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:32:34.720 [2024-06-10 11:56:06.733536] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:32:34.720 [2024-06-10 11:56:06.733858] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.720 11:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.978 11:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:34.978 "name": "raid_bdev1", 00:32:34.978 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:34.978 "strip_size_kb": 0, 00:32:34.978 "state": "online", 00:32:34.978 "raid_level": "raid1", 00:32:34.978 "superblock": true, 00:32:34.978 "num_base_bdevs": 4, 00:32:34.978 "num_base_bdevs_discovered": 4, 00:32:34.978 "num_base_bdevs_operational": 4, 00:32:34.978 "base_bdevs_list": [ 00:32:34.978 { 
00:32:34.978 "name": "BaseBdev1", 00:32:34.978 "uuid": "50513d5c-d6b3-5dc5-a94c-507d0b909f58", 00:32:34.978 "is_configured": true, 00:32:34.978 "data_offset": 2048, 00:32:34.978 "data_size": 63488 00:32:34.978 }, 00:32:34.978 { 00:32:34.978 "name": "BaseBdev2", 00:32:34.978 "uuid": "b1153877-29bc-52e3-a58c-33a897e583ab", 00:32:34.978 "is_configured": true, 00:32:34.978 "data_offset": 2048, 00:32:34.978 "data_size": 63488 00:32:34.978 }, 00:32:34.978 { 00:32:34.978 "name": "BaseBdev3", 00:32:34.978 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:34.978 "is_configured": true, 00:32:34.978 "data_offset": 2048, 00:32:34.978 "data_size": 63488 00:32:34.978 }, 00:32:34.978 { 00:32:34.978 "name": "BaseBdev4", 00:32:34.978 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:34.978 "is_configured": true, 00:32:34.978 "data_offset": 2048, 00:32:34.978 "data_size": 63488 00:32:34.978 } 00:32:34.978 ] 00:32:34.978 }' 00:32:34.978 11:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:34.978 11:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.543 11:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:35.543 11:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:35.823 [2024-06-10 11:56:07.850283] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:36.081 11:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:32:36.081 11:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.081 11:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:36.339 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:36.597 [2024-06-10 11:56:08.434208] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:32:36.597 /dev/nbd0 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:36.597 1+0 records in 00:32:36.597 1+0 records out 00:32:36.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039543 s, 10.4 MB/s 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:32:36.597 11:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:32:43.153 63488+0 records in 00:32:43.153 63488+0 records out 00:32:43.153 32505856 bytes (33 MB, 31 MiB) copied, 5.8398 s, 5.6 MB/s 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:43.153 [2024-06-10 11:56:14.614708] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:43.153 [2024-06-10 11:56:14.806440] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.153 11:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:43.153 11:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:43.153 "name": "raid_bdev1", 00:32:43.153 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:43.153 "strip_size_kb": 0, 00:32:43.153 "state": "online", 00:32:43.153 "raid_level": "raid1", 00:32:43.153 "superblock": true, 00:32:43.153 "num_base_bdevs": 4, 00:32:43.153 "num_base_bdevs_discovered": 3, 00:32:43.153 "num_base_bdevs_operational": 3, 00:32:43.153 "base_bdevs_list": [ 00:32:43.153 { 00:32:43.153 "name": null, 00:32:43.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.153 "is_configured": false, 00:32:43.153 "data_offset": 2048, 00:32:43.153 "data_size": 63488 00:32:43.153 }, 00:32:43.153 { 00:32:43.153 "name": "BaseBdev2", 00:32:43.153 "uuid": "b1153877-29bc-52e3-a58c-33a897e583ab", 00:32:43.153 "is_configured": true, 00:32:43.153 "data_offset": 2048, 
00:32:43.153 "data_size": 63488 00:32:43.153 }, 00:32:43.153 { 00:32:43.153 "name": "BaseBdev3", 00:32:43.153 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:43.153 "is_configured": true, 00:32:43.153 "data_offset": 2048, 00:32:43.153 "data_size": 63488 00:32:43.153 }, 00:32:43.153 { 00:32:43.153 "name": "BaseBdev4", 00:32:43.153 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:43.153 "is_configured": true, 00:32:43.153 "data_offset": 2048, 00:32:43.153 "data_size": 63488 00:32:43.153 } 00:32:43.153 ] 00:32:43.153 }' 00:32:43.153 11:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:43.153 11:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.721 11:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:43.978 [2024-06-10 11:56:15.814711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:43.978 [2024-06-10 11:56:15.831748] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:32:43.978 [2024-06-10 11:56:15.833963] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:43.978 11:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:32:44.911 11:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:44.911 11:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:44.911 11:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:44.911 11:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:44.911 11:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:44.911 11:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.911 11:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.168 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:45.168 "name": "raid_bdev1", 00:32:45.168 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:45.168 "strip_size_kb": 0, 00:32:45.168 "state": "online", 00:32:45.168 "raid_level": "raid1", 00:32:45.168 "superblock": true, 00:32:45.168 "num_base_bdevs": 4, 00:32:45.168 "num_base_bdevs_discovered": 4, 00:32:45.168 "num_base_bdevs_operational": 4, 00:32:45.168 "process": { 00:32:45.168 "type": "rebuild", 00:32:45.168 "target": "spare", 00:32:45.168 "progress": { 00:32:45.168 "blocks": 24576, 00:32:45.168 "percent": 38 00:32:45.168 } 00:32:45.168 }, 00:32:45.168 "base_bdevs_list": [ 00:32:45.168 { 00:32:45.168 "name": "spare", 00:32:45.168 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:45.168 "is_configured": true, 00:32:45.168 "data_offset": 2048, 00:32:45.168 "data_size": 63488 00:32:45.168 }, 00:32:45.168 { 00:32:45.168 "name": "BaseBdev2", 00:32:45.168 "uuid": "b1153877-29bc-52e3-a58c-33a897e583ab", 00:32:45.168 "is_configured": true, 00:32:45.168 "data_offset": 2048, 00:32:45.168 "data_size": 63488 00:32:45.168 }, 00:32:45.168 { 00:32:45.168 "name": "BaseBdev3", 00:32:45.168 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:45.168 
"is_configured": true, 00:32:45.168 "data_offset": 2048, 00:32:45.168 "data_size": 63488 00:32:45.168 }, 00:32:45.168 { 00:32:45.168 "name": "BaseBdev4", 00:32:45.168 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:45.168 "is_configured": true, 00:32:45.168 "data_offset": 2048, 00:32:45.168 "data_size": 63488 00:32:45.168 } 00:32:45.168 ] 00:32:45.168 }' 00:32:45.168 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:45.168 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:45.168 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:45.168 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:45.168 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:45.426 [2024-06-10 11:56:17.483848] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:45.683 [2024-06-10 11:56:17.544896] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:45.683 [2024-06-10 11:56:17.544974] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.684 [2024-06-10 11:56:17.544992] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:45.684 [2024-06-10 11:56:17.545001] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.684 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.942 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:45.942 "name": "raid_bdev1", 00:32:45.942 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:45.942 "strip_size_kb": 0, 00:32:45.942 "state": "online", 00:32:45.942 "raid_level": "raid1", 00:32:45.942 "superblock": true, 00:32:45.942 "num_base_bdevs": 4, 00:32:45.942 "num_base_bdevs_discovered": 3, 00:32:45.942 "num_base_bdevs_operational": 3, 00:32:45.942 "base_bdevs_list": [ 00:32:45.942 { 
00:32:45.942 "name": null, 00:32:45.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.942 "is_configured": false, 00:32:45.942 "data_offset": 2048, 00:32:45.942 "data_size": 63488 00:32:45.942 }, 00:32:45.942 { 00:32:45.942 "name": "BaseBdev2", 00:32:45.942 "uuid": "b1153877-29bc-52e3-a58c-33a897e583ab", 00:32:45.942 "is_configured": true, 00:32:45.942 "data_offset": 2048, 00:32:45.942 "data_size": 63488 00:32:45.942 }, 00:32:45.942 { 00:32:45.942 "name": "BaseBdev3", 00:32:45.942 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:45.942 "is_configured": true, 00:32:45.942 "data_offset": 2048, 00:32:45.942 "data_size": 63488 00:32:45.942 }, 00:32:45.942 { 00:32:45.942 "name": "BaseBdev4", 00:32:45.942 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:45.942 "is_configured": true, 00:32:45.942 "data_offset": 2048, 00:32:45.942 "data_size": 63488 00:32:45.942 } 00:32:45.942 ] 00:32:45.942 }' 00:32:45.942 11:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.942 11:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.508 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:46.508 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:46.508 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:46.508 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:46.508 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:46.508 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.508 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.766 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:46.766 "name": "raid_bdev1", 00:32:46.766 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:46.766 "strip_size_kb": 0, 00:32:46.766 "state": "online", 00:32:46.766 "raid_level": "raid1", 00:32:46.766 "superblock": true, 00:32:46.766 "num_base_bdevs": 4, 00:32:46.766 "num_base_bdevs_discovered": 3, 00:32:46.766 "num_base_bdevs_operational": 3, 00:32:46.766 "base_bdevs_list": [ 00:32:46.766 { 00:32:46.766 "name": null, 00:32:46.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.766 "is_configured": false, 00:32:46.766 "data_offset": 2048, 00:32:46.766 "data_size": 63488 00:32:46.766 }, 00:32:46.766 { 00:32:46.766 "name": "BaseBdev2", 00:32:46.766 "uuid": "b1153877-29bc-52e3-a58c-33a897e583ab", 00:32:46.766 "is_configured": true, 00:32:46.766 "data_offset": 2048, 00:32:46.766 "data_size": 63488 00:32:46.766 }, 00:32:46.766 { 00:32:46.766 "name": "BaseBdev3", 00:32:46.766 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:46.766 "is_configured": true, 00:32:46.766 "data_offset": 2048, 00:32:46.766 "data_size": 63488 00:32:46.766 }, 00:32:46.766 { 00:32:46.766 "name": "BaseBdev4", 00:32:46.766 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:46.766 "is_configured": true, 00:32:46.766 "data_offset": 2048, 00:32:46.766 "data_size": 63488 00:32:46.766 } 00:32:46.766 ] 00:32:46.766 }' 00:32:46.766 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:46.766 11:56:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:46.766 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:46.766 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:46.766 11:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:47.024 [2024-06-10 11:56:19.059645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:47.024 [2024-06-10 11:56:19.077386] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:32:47.024 [2024-06-10 11:56:19.079675] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:47.285 11:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:48.219 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:48.219 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:48.219 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:48.219 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:48.219 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:48.219 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.219 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.477 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:48.477 "name": "raid_bdev1", 00:32:48.477 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:48.477 "strip_size_kb": 0, 00:32:48.477 "state": "online", 00:32:48.477 "raid_level": "raid1", 00:32:48.477 "superblock": true, 00:32:48.477 "num_base_bdevs": 4, 00:32:48.477 "num_base_bdevs_discovered": 4, 00:32:48.477 "num_base_bdevs_operational": 4, 00:32:48.477 "process": { 00:32:48.477 "type": "rebuild", 00:32:48.477 "target": "spare", 00:32:48.477 "progress": { 00:32:48.477 "blocks": 24576, 00:32:48.477 "percent": 38 00:32:48.477 } 00:32:48.477 }, 00:32:48.477 "base_bdevs_list": [ 00:32:48.477 { 00:32:48.477 "name": "spare", 00:32:48.478 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:48.478 "is_configured": true, 00:32:48.478 "data_offset": 2048, 00:32:48.478 "data_size": 63488 00:32:48.478 }, 00:32:48.478 { 00:32:48.478 "name": "BaseBdev2", 00:32:48.478 "uuid": "b1153877-29bc-52e3-a58c-33a897e583ab", 00:32:48.478 "is_configured": true, 00:32:48.478 "data_offset": 2048, 00:32:48.478 "data_size": 63488 00:32:48.478 }, 00:32:48.478 { 00:32:48.478 "name": "BaseBdev3", 00:32:48.478 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:48.478 "is_configured": true, 00:32:48.478 "data_offset": 2048, 00:32:48.478 "data_size": 63488 00:32:48.478 }, 00:32:48.478 { 00:32:48.478 "name": "BaseBdev4", 00:32:48.478 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:48.478 "is_configured": true, 00:32:48.478 "data_offset": 2048, 00:32:48.478 "data_size": 63488 00:32:48.478 } 00:32:48.478 ] 00:32:48.478 }' 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:32:48.478 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:32:48.478 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:48.736 [2024-06-10 11:56:20.678253] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:48.736 [2024-06-10 11:56:20.789642] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.994 11:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:49.252 "name": "raid_bdev1", 00:32:49.252 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:49.252 "strip_size_kb": 0, 00:32:49.252 "state": "online", 00:32:49.252 "raid_level": "raid1", 00:32:49.252 "superblock": true, 00:32:49.252 "num_base_bdevs": 4, 00:32:49.252 "num_base_bdevs_discovered": 3, 00:32:49.252 "num_base_bdevs_operational": 3, 00:32:49.252 "process": { 00:32:49.252 "type": "rebuild", 00:32:49.252 "target": "spare", 00:32:49.252 "progress": { 00:32:49.252 "blocks": 36864, 00:32:49.252 "percent": 58 00:32:49.252 } 00:32:49.252 }, 00:32:49.252 "base_bdevs_list": [ 00:32:49.252 { 00:32:49.252 "name": "spare", 00:32:49.252 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:49.252 "is_configured": true, 00:32:49.252 "data_offset": 2048, 00:32:49.252 "data_size": 63488 00:32:49.252 }, 00:32:49.252 { 00:32:49.252 "name": null, 00:32:49.252 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:49.252 "is_configured": false, 00:32:49.252 "data_offset": 2048, 00:32:49.252 "data_size": 63488 00:32:49.252 }, 00:32:49.252 { 00:32:49.252 "name": "BaseBdev3", 00:32:49.252 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:49.252 "is_configured": true, 00:32:49.252 "data_offset": 2048, 00:32:49.252 "data_size": 63488 00:32:49.252 }, 00:32:49.252 { 00:32:49.252 "name": "BaseBdev4", 00:32:49.252 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:49.252 "is_configured": true, 00:32:49.252 "data_offset": 2048, 00:32:49.252 "data_size": 63488 00:32:49.252 } 00:32:49.252 ] 00:32:49.252 }' 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1041 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.252 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.511 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:49.511 "name": "raid_bdev1", 00:32:49.511 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:49.511 "strip_size_kb": 0, 00:32:49.511 "state": "online", 00:32:49.511 "raid_level": "raid1", 00:32:49.511 "superblock": true, 00:32:49.511 "num_base_bdevs": 4, 00:32:49.511 "num_base_bdevs_discovered": 3, 00:32:49.511 "num_base_bdevs_operational": 3, 00:32:49.511 "process": { 00:32:49.511 "type": "rebuild", 00:32:49.511 "target": "spare", 00:32:49.511 "progress": { 00:32:49.511 "blocks": 43008, 00:32:49.511 "percent": 67 00:32:49.511 } 00:32:49.511 }, 00:32:49.511 "base_bdevs_list": [ 00:32:49.511 { 00:32:49.511 "name": "spare", 00:32:49.511 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:49.511 "is_configured": true, 00:32:49.511 "data_offset": 2048, 00:32:49.511 "data_size": 63488 00:32:49.511 }, 00:32:49.511 { 00:32:49.511 "name": null, 00:32:49.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.511 "is_configured": false, 00:32:49.511 "data_offset": 2048, 00:32:49.511 "data_size": 63488 00:32:49.511 }, 00:32:49.511 { 00:32:49.511 "name": "BaseBdev3", 00:32:49.511 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:49.511 "is_configured": true, 00:32:49.511 "data_offset": 2048, 00:32:49.511 "data_size": 63488 00:32:49.511 }, 
00:32:49.511 { 00:32:49.511 "name": "BaseBdev4", 00:32:49.511 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:49.511 "is_configured": true, 00:32:49.511 "data_offset": 2048, 00:32:49.511 "data_size": 63488 00:32:49.511 } 00:32:49.511 ] 00:32:49.511 }' 00:32:49.511 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:49.511 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:49.511 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:49.511 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:49.511 11:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:50.444 [2024-06-10 11:56:22.299606] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:50.444 [2024-06-10 11:56:22.299706] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:50.444 [2024-06-10 11:56:22.299850] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:50.444 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:50.444 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:50.444 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:50.444 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:50.444 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:50.444 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:50.444 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.444 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.702 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:50.702 "name": "raid_bdev1", 00:32:50.702 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:50.702 "strip_size_kb": 0, 00:32:50.702 "state": "online", 00:32:50.702 "raid_level": "raid1", 00:32:50.702 "superblock": true, 00:32:50.702 "num_base_bdevs": 4, 00:32:50.702 "num_base_bdevs_discovered": 3, 00:32:50.702 "num_base_bdevs_operational": 3, 00:32:50.702 "base_bdevs_list": [ 00:32:50.702 { 00:32:50.702 "name": "spare", 00:32:50.702 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:50.702 "is_configured": true, 00:32:50.702 "data_offset": 2048, 00:32:50.702 "data_size": 63488 00:32:50.702 }, 00:32:50.702 { 00:32:50.702 "name": null, 00:32:50.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.702 "is_configured": false, 00:32:50.702 "data_offset": 2048, 00:32:50.702 "data_size": 63488 00:32:50.702 }, 00:32:50.702 { 00:32:50.702 "name": "BaseBdev3", 00:32:50.702 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:50.702 "is_configured": true, 00:32:50.702 "data_offset": 2048, 00:32:50.702 "data_size": 63488 00:32:50.702 }, 00:32:50.702 { 00:32:50.702 "name": "BaseBdev4", 00:32:50.702 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:50.702 "is_configured": true, 00:32:50.702 "data_offset": 2048, 00:32:50.702 "data_size": 63488 00:32:50.702 } 
00:32:50.702 ] 00:32:50.702 }' 00:32:50.702 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.960 11:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.218 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:51.219 "name": "raid_bdev1", 00:32:51.219 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:51.219 "strip_size_kb": 0, 00:32:51.219 "state": "online", 00:32:51.219 "raid_level": "raid1", 00:32:51.219 "superblock": true, 00:32:51.219 "num_base_bdevs": 4, 00:32:51.219 "num_base_bdevs_discovered": 3, 00:32:51.219 "num_base_bdevs_operational": 3, 00:32:51.219 "base_bdevs_list": [ 00:32:51.219 { 00:32:51.219 "name": "spare", 00:32:51.219 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:51.219 "is_configured": true, 00:32:51.219 "data_offset": 2048, 00:32:51.219 "data_size": 63488 00:32:51.219 }, 00:32:51.219 { 00:32:51.219 "name": null, 00:32:51.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.219 "is_configured": false, 00:32:51.219 "data_offset": 2048, 00:32:51.219 "data_size": 63488 00:32:51.219 }, 00:32:51.219 { 00:32:51.219 "name": "BaseBdev3", 00:32:51.219 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:51.219 "is_configured": true, 00:32:51.219 "data_offset": 2048, 00:32:51.219 "data_size": 63488 00:32:51.219 }, 00:32:51.219 { 00:32:51.219 "name": "BaseBdev4", 00:32:51.219 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:51.219 "is_configured": true, 00:32:51.219 "data_offset": 2048, 00:32:51.219 "data_size": 63488 00:32:51.219 } 00:32:51.219 ] 00:32:51.219 }' 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.219 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.476 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:51.476 "name": "raid_bdev1", 00:32:51.476 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:51.476 "strip_size_kb": 0, 00:32:51.476 "state": "online", 00:32:51.476 "raid_level": "raid1", 00:32:51.476 "superblock": true, 00:32:51.476 "num_base_bdevs": 4, 00:32:51.476 "num_base_bdevs_discovered": 3, 00:32:51.476 "num_base_bdevs_operational": 3, 00:32:51.476 "base_bdevs_list": [ 00:32:51.476 { 00:32:51.476 "name": "spare", 00:32:51.476 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:51.476 "is_configured": true, 00:32:51.476 "data_offset": 2048, 00:32:51.476 "data_size": 63488 00:32:51.476 }, 00:32:51.476 { 00:32:51.476 "name": null, 00:32:51.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.476 "is_configured": false, 00:32:51.476 "data_offset": 2048, 00:32:51.476 "data_size": 63488 00:32:51.476 }, 00:32:51.476 { 00:32:51.476 "name": "BaseBdev3", 00:32:51.476 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:51.476 "is_configured": true, 00:32:51.476 "data_offset": 2048, 00:32:51.476 "data_size": 63488 00:32:51.476 }, 00:32:51.476 { 00:32:51.476 "name": "BaseBdev4", 00:32:51.476 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:51.476 "is_configured": true, 00:32:51.476 "data_offset": 2048, 00:32:51.476 "data_size": 63488 00:32:51.476 } 00:32:51.476 ] 00:32:51.476 }' 00:32:51.476 11:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:51.476 11:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:52.106 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:52.364 [2024-06-10 11:56:24.344357] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:52.364 [2024-06-10 11:56:24.344405] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:52.364 [2024-06-10 11:56:24.344513] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:52.364 [2024-06-10 11:56:24.344603] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:52.364 [2024-06-10 11:56:24.344614] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:32:52.364 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.364 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:52.622 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:53.188 /dev/nbd0 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:53.188 1+0 records in 00:32:53.188 1+0 records out 00:32:53.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903908 s, 4.5 MB/s 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:53.188 11:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:53.188 /dev/nbd1 00:32:53.188 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:53.188 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:53.188 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:32:53.188 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:32:53.188 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:53.188 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:53.188 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:53.447 1+0 records in 00:32:53.447 1+0 records out 00:32:53.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468482 s, 8.7 MB/s 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 
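The waitfornbd traces above (common/autotest_common.sh@867-@888) poll /proc/partitions for each nbd device and then read a single 4 KiB block with O_DIRECT to confirm the device answers I/O before the cmp step runs. A minimal bash reconstruction of that helper is sketched below; it is inferred from the xtrace only, so the retry delay and the scratch-file path are assumptions, not the verbatim SPDK source.

#!/usr/bin/env bash
# Sketch of waitfornbd as inferred from the xtrace above; not the verbatim
# SPDK helper. The sleep interval and the scratch-file path are assumptions.
waitfornbd() {
    local nbd_name=$1
    local i
    # Poll until the kernel lists the device in /proc/partitions (up to 20 tries).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # Read one 4 KiB block with O_DIRECT; a non-empty copy proves the device serves reads.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}

# Example: wait for /dev/nbd0 after nbd_start_disk, as in the trace above.
waitfornbd nbd0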
00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:53.447 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:53.704 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:53.704 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:53.704 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:53.705 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:53.705 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.705 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:53.705 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:53.705 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:53.705 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:53.705 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:53.962 11:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:54.220 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:54.479 [2024-06-10 11:56:26.538423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:54.479 [2024-06-10 11:56:26.538536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:54.479 [2024-06-10 11:56:26.538606] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:32:54.479 [2024-06-10 11:56:26.538641] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:54.743 [2024-06-10 11:56:26.541336] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:54.743 [2024-06-10 11:56:26.541585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:54.743 [2024-06-10 11:56:26.541870] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:54.743 [2024-06-10 11:56:26.542086] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:54.743 [2024-06-10 11:56:26.542396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:54.743 [2024-06-10 11:56:26.542682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:54.743 spare 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.743 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.743 [2024-06-10 11:56:26.642919] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:32:54.743 [2024-06-10 11:56:26.643137] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:54.743 [2024-06-10 11:56:26.643400] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:32:54.743 [2024-06-10 11:56:26.643882] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:32:54.743 [2024-06-10 11:56:26.644002] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:32:54.743 [2024-06-10 11:56:26.644256] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:55.000 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:55.000 "name": "raid_bdev1", 00:32:55.000 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:55.000 "strip_size_kb": 0, 00:32:55.000 "state": "online", 00:32:55.000 "raid_level": "raid1", 00:32:55.000 "superblock": true, 00:32:55.000 "num_base_bdevs": 4, 00:32:55.000 "num_base_bdevs_discovered": 3, 00:32:55.000 "num_base_bdevs_operational": 3, 00:32:55.000 "base_bdevs_list": [ 00:32:55.000 { 00:32:55.000 "name": "spare", 00:32:55.000 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:55.000 "is_configured": true, 00:32:55.000 "data_offset": 2048, 00:32:55.000 "data_size": 63488 00:32:55.000 }, 00:32:55.000 { 00:32:55.000 "name": null, 00:32:55.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.000 "is_configured": false, 00:32:55.000 "data_offset": 2048, 00:32:55.000 "data_size": 63488 00:32:55.000 }, 00:32:55.000 { 00:32:55.000 "name": "BaseBdev3", 00:32:55.000 "uuid": 
"367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:55.000 "is_configured": true, 00:32:55.000 "data_offset": 2048, 00:32:55.000 "data_size": 63488 00:32:55.000 }, 00:32:55.000 { 00:32:55.000 "name": "BaseBdev4", 00:32:55.000 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:55.000 "is_configured": true, 00:32:55.000 "data_offset": 2048, 00:32:55.000 "data_size": 63488 00:32:55.000 } 00:32:55.000 ] 00:32:55.000 }' 00:32:55.000 11:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:55.000 11:56:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:55.566 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:55.566 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:55.566 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:55.566 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:55.566 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:55.566 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.566 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.824 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:55.824 "name": "raid_bdev1", 00:32:55.824 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:55.824 "strip_size_kb": 0, 00:32:55.824 "state": "online", 00:32:55.824 "raid_level": "raid1", 00:32:55.824 "superblock": true, 00:32:55.824 "num_base_bdevs": 4, 00:32:55.824 "num_base_bdevs_discovered": 3, 00:32:55.824 "num_base_bdevs_operational": 3, 00:32:55.824 "base_bdevs_list": [ 00:32:55.824 { 00:32:55.824 "name": "spare", 00:32:55.824 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:55.824 "is_configured": true, 00:32:55.824 "data_offset": 2048, 00:32:55.824 "data_size": 63488 00:32:55.824 }, 00:32:55.824 { 00:32:55.824 "name": null, 00:32:55.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.824 "is_configured": false, 00:32:55.824 "data_offset": 2048, 00:32:55.824 "data_size": 63488 00:32:55.824 }, 00:32:55.824 { 00:32:55.824 "name": "BaseBdev3", 00:32:55.824 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:55.824 "is_configured": true, 00:32:55.824 "data_offset": 2048, 00:32:55.824 "data_size": 63488 00:32:55.824 }, 00:32:55.824 { 00:32:55.824 "name": "BaseBdev4", 00:32:55.824 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:55.824 "is_configured": true, 00:32:55.825 "data_offset": 2048, 00:32:55.825 "data_size": 63488 00:32:55.825 } 00:32:55.825 ] 00:32:55.825 }' 00:32:55.825 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:55.825 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:55.825 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:55.825 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:55.825 11:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.825 11:56:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:56.083 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:56.083 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:56.341 [2024-06-10 11:56:28.339100] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.341 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.599 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:56.599 "name": "raid_bdev1", 00:32:56.599 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:56.599 "strip_size_kb": 0, 00:32:56.599 "state": "online", 00:32:56.599 "raid_level": "raid1", 00:32:56.599 "superblock": true, 00:32:56.599 "num_base_bdevs": 4, 00:32:56.599 "num_base_bdevs_discovered": 2, 00:32:56.599 "num_base_bdevs_operational": 2, 00:32:56.599 "base_bdevs_list": [ 00:32:56.599 { 00:32:56.599 "name": null, 00:32:56.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.599 "is_configured": false, 00:32:56.599 "data_offset": 2048, 00:32:56.599 "data_size": 63488 00:32:56.599 }, 00:32:56.599 { 00:32:56.599 "name": null, 00:32:56.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.599 "is_configured": false, 00:32:56.599 "data_offset": 2048, 00:32:56.599 "data_size": 63488 00:32:56.599 }, 00:32:56.599 { 00:32:56.599 "name": "BaseBdev3", 00:32:56.599 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:56.599 "is_configured": true, 00:32:56.599 "data_offset": 2048, 00:32:56.599 "data_size": 63488 00:32:56.599 }, 00:32:56.599 { 00:32:56.599 "name": "BaseBdev4", 00:32:56.599 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:56.599 "is_configured": true, 00:32:56.599 "data_offset": 2048, 00:32:56.599 "data_size": 63488 00:32:56.599 } 00:32:56.599 ] 00:32:56.599 }' 00:32:56.599 11:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:56.599 11:56:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.533 11:56:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:57.533 [2024-06-10 11:56:29.503422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:57.533 [2024-06-10 11:56:29.503869] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:32:57.533 [2024-06-10 11:56:29.504003] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:57.533 [2024-06-10 11:56:29.504163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:57.533 [2024-06-10 11:56:29.521708] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:32:57.533 [2024-06-10 11:56:29.524106] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:57.533 11:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:58.906 "name": "raid_bdev1", 00:32:58.906 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:58.906 "strip_size_kb": 0, 00:32:58.906 "state": "online", 00:32:58.906 "raid_level": "raid1", 00:32:58.906 "superblock": true, 00:32:58.906 "num_base_bdevs": 4, 00:32:58.906 "num_base_bdevs_discovered": 3, 00:32:58.906 "num_base_bdevs_operational": 3, 00:32:58.906 "process": { 00:32:58.906 "type": "rebuild", 00:32:58.906 "target": "spare", 00:32:58.906 "progress": { 00:32:58.906 "blocks": 26624, 00:32:58.906 "percent": 41 00:32:58.906 } 00:32:58.906 }, 00:32:58.906 "base_bdevs_list": [ 00:32:58.906 { 00:32:58.906 "name": "spare", 00:32:58.906 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:32:58.906 "is_configured": true, 00:32:58.906 "data_offset": 2048, 00:32:58.906 "data_size": 63488 00:32:58.906 }, 00:32:58.906 { 00:32:58.906 "name": null, 00:32:58.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.906 "is_configured": false, 00:32:58.906 "data_offset": 2048, 00:32:58.906 "data_size": 63488 00:32:58.906 }, 00:32:58.906 { 00:32:58.906 "name": "BaseBdev3", 00:32:58.906 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:58.906 "is_configured": true, 00:32:58.906 "data_offset": 2048, 00:32:58.906 "data_size": 63488 00:32:58.906 }, 00:32:58.906 { 00:32:58.906 "name": "BaseBdev4", 00:32:58.906 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:58.906 "is_configured": true, 00:32:58.906 "data_offset": 2048, 00:32:58.906 "data_size": 63488 00:32:58.906 } 
00:32:58.906 ] 00:32:58.906 }' 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:58.906 11:56:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:59.165 [2024-06-10 11:56:31.178568] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:59.423 [2024-06-10 11:56:31.235620] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:59.423 [2024-06-10 11:56:31.235859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:59.423 [2024-06-10 11:56:31.235979] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:59.423 [2024-06-10 11:56:31.236055] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:59.423 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.681 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:59.681 "name": "raid_bdev1", 00:32:59.681 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:32:59.681 "strip_size_kb": 0, 00:32:59.681 "state": "online", 00:32:59.681 "raid_level": "raid1", 00:32:59.681 "superblock": true, 00:32:59.681 "num_base_bdevs": 4, 00:32:59.681 "num_base_bdevs_discovered": 2, 00:32:59.681 "num_base_bdevs_operational": 2, 00:32:59.681 "base_bdevs_list": [ 00:32:59.681 { 00:32:59.681 "name": null, 00:32:59.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.681 "is_configured": false, 00:32:59.681 "data_offset": 2048, 00:32:59.681 "data_size": 63488 00:32:59.681 }, 00:32:59.681 { 00:32:59.681 "name": null, 00:32:59.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.681 "is_configured": 
false, 00:32:59.681 "data_offset": 2048, 00:32:59.681 "data_size": 63488 00:32:59.681 }, 00:32:59.681 { 00:32:59.681 "name": "BaseBdev3", 00:32:59.681 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:32:59.681 "is_configured": true, 00:32:59.681 "data_offset": 2048, 00:32:59.681 "data_size": 63488 00:32:59.681 }, 00:32:59.681 { 00:32:59.681 "name": "BaseBdev4", 00:32:59.681 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:32:59.681 "is_configured": true, 00:32:59.681 "data_offset": 2048, 00:32:59.681 "data_size": 63488 00:32:59.681 } 00:32:59.681 ] 00:32:59.681 }' 00:32:59.681 11:56:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:59.681 11:56:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:00.248 11:56:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:00.507 [2024-06-10 11:56:32.432870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:00.507 [2024-06-10 11:56:32.433230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:00.507 [2024-06-10 11:56:32.433321] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:33:00.507 [2024-06-10 11:56:32.433607] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:00.507 [2024-06-10 11:56:32.434258] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:00.507 [2024-06-10 11:56:32.434426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:00.507 [2024-06-10 11:56:32.434708] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:00.507 [2024-06-10 11:56:32.434830] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:33:00.507 [2024-06-10 11:56:32.434919] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:00.507 [2024-06-10 11:56:32.435051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:00.507 [2024-06-10 11:56:32.451609] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc23d0 00:33:00.507 spare 00:33:00.507 [2024-06-10 11:56:32.453862] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:00.507 11:56:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:33:01.440 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:01.440 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:01.440 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:01.440 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:01.440 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:01.440 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.441 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.699 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:01.699 "name": "raid_bdev1", 00:33:01.699 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:33:01.699 "strip_size_kb": 0, 00:33:01.699 "state": "online", 00:33:01.699 "raid_level": "raid1", 00:33:01.699 "superblock": true, 00:33:01.699 "num_base_bdevs": 4, 00:33:01.699 "num_base_bdevs_discovered": 3, 00:33:01.699 "num_base_bdevs_operational": 3, 00:33:01.699 "process": { 00:33:01.699 "type": "rebuild", 00:33:01.699 "target": "spare", 00:33:01.699 "progress": { 00:33:01.699 "blocks": 24576, 00:33:01.699 "percent": 38 00:33:01.699 } 00:33:01.699 }, 00:33:01.699 "base_bdevs_list": [ 00:33:01.699 { 00:33:01.699 "name": "spare", 00:33:01.699 "uuid": "e850886a-2aaf-58d5-91d5-729c2f06d7e4", 00:33:01.699 "is_configured": true, 00:33:01.699 "data_offset": 2048, 00:33:01.699 "data_size": 63488 00:33:01.699 }, 00:33:01.699 { 00:33:01.699 "name": null, 00:33:01.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.699 "is_configured": false, 00:33:01.699 "data_offset": 2048, 00:33:01.699 "data_size": 63488 00:33:01.699 }, 00:33:01.699 { 00:33:01.699 "name": "BaseBdev3", 00:33:01.699 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:33:01.699 "is_configured": true, 00:33:01.699 "data_offset": 2048, 00:33:01.699 "data_size": 63488 00:33:01.699 }, 00:33:01.699 { 00:33:01.699 "name": "BaseBdev4", 00:33:01.699 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:33:01.699 "is_configured": true, 00:33:01.699 "data_offset": 2048, 00:33:01.699 "data_size": 63488 00:33:01.699 } 00:33:01.699 ] 00:33:01.699 }' 00:33:01.699 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:01.956 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:01.956 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:01.956 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:01.956 11:56:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:02.213 [2024-06-10 11:56:34.103960] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:02.213 [2024-06-10 11:56:34.164942] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:02.213 [2024-06-10 11:56:34.165187] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:02.213 [2024-06-10 11:56:34.165249] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:02.213 [2024-06-10 11:56:34.165338] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.213 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.470 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:02.470 "name": "raid_bdev1", 00:33:02.470 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:33:02.470 "strip_size_kb": 0, 00:33:02.470 "state": "online", 00:33:02.470 "raid_level": "raid1", 00:33:02.470 "superblock": true, 00:33:02.471 "num_base_bdevs": 4, 00:33:02.471 "num_base_bdevs_discovered": 2, 00:33:02.471 "num_base_bdevs_operational": 2, 00:33:02.471 "base_bdevs_list": [ 00:33:02.471 { 00:33:02.471 "name": null, 00:33:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.471 "is_configured": false, 00:33:02.471 "data_offset": 2048, 00:33:02.471 "data_size": 63488 00:33:02.471 }, 00:33:02.471 { 00:33:02.471 "name": null, 00:33:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.471 "is_configured": false, 00:33:02.471 "data_offset": 2048, 00:33:02.471 "data_size": 63488 00:33:02.471 }, 00:33:02.471 { 00:33:02.471 "name": "BaseBdev3", 00:33:02.471 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:33:02.471 "is_configured": true, 00:33:02.471 "data_offset": 2048, 00:33:02.471 "data_size": 63488 00:33:02.471 }, 00:33:02.471 { 00:33:02.471 "name": "BaseBdev4", 00:33:02.471 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:33:02.471 "is_configured": true, 00:33:02.471 "data_offset": 2048, 00:33:02.471 "data_size": 63488 00:33:02.471 } 00:33:02.471 ] 00:33:02.471 }' 
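verify_raid_bdev_state is traced above only up to the bdev_raid_get_bdevs/jq fetch (bdev_raid.sh@126) before xtrace is disabled, so the actual comparisons never appear in this log. The sketch below shows one plausible form of that check, built from the fields present in the dumped JSON; only the locals and the jq selection are taken from the trace, the comparison body is an assumption.

#!/usr/bin/env bash
# Plausible shape of verify_raid_bdev_state; the comparisons run behind
# xtrace_disable in the log above, so this body is an assumption.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

verify_raid_bdev_state_sketch() {
    local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4 expected_operational=$5
    local raid_bdev_info
    # Same fetch as bdev_raid.sh@126 in the trace.
    raid_bdev_info=$("$rpc_py" -s "$sock" bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$raid_bdev_name\")")
    [ "$(jq -r '.state' <<<"$raid_bdev_info")" = "$expected_state" ] &&
        [ "$(jq -r '.raid_level' <<<"$raid_bdev_info")" = "$raid_level" ] &&
        [ "$(jq -r '.strip_size_kb' <<<"$raid_bdev_info")" -eq "$strip_size" ] &&
        [ "$(jq -r '.num_base_bdevs_operational' <<<"$raid_bdev_info")" -eq "$expected_operational" ]
}

# Matches the call traced above after the spare was removed again:
verify_raid_bdev_state_sketch raid_bdev1 online raid1 0 2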
00:33:02.471 11:56:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:02.471 11:56:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:03.403 "name": "raid_bdev1", 00:33:03.403 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:33:03.403 "strip_size_kb": 0, 00:33:03.403 "state": "online", 00:33:03.403 "raid_level": "raid1", 00:33:03.403 "superblock": true, 00:33:03.403 "num_base_bdevs": 4, 00:33:03.403 "num_base_bdevs_discovered": 2, 00:33:03.403 "num_base_bdevs_operational": 2, 00:33:03.403 "base_bdevs_list": [ 00:33:03.403 { 00:33:03.403 "name": null, 00:33:03.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.403 "is_configured": false, 00:33:03.403 "data_offset": 2048, 00:33:03.403 "data_size": 63488 00:33:03.403 }, 00:33:03.403 { 00:33:03.403 "name": null, 00:33:03.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.403 "is_configured": false, 00:33:03.403 "data_offset": 2048, 00:33:03.403 "data_size": 63488 00:33:03.403 }, 00:33:03.403 { 00:33:03.403 "name": "BaseBdev3", 00:33:03.403 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:33:03.403 "is_configured": true, 00:33:03.403 "data_offset": 2048, 00:33:03.403 "data_size": 63488 00:33:03.403 }, 00:33:03.403 { 00:33:03.403 "name": "BaseBdev4", 00:33:03.403 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:33:03.403 "is_configured": true, 00:33:03.403 "data_offset": 2048, 00:33:03.403 "data_size": 63488 00:33:03.403 } 00:33:03.403 ] 00:33:03.403 }' 00:33:03.403 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:03.661 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:03.661 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:03.661 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:03.661 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:03.918 11:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:04.176 [2024-06-10 11:56:36.086597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:04.176 [2024-06-10 11:56:36.086961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
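The verify_raid_bdev_process calls traced throughout this run (bdev_raid.sh@182-@190) all follow one pattern: fetch the raid bdev JSON over the RPC socket, then compare .process.type and .process.target against the expected values, with jq's "// \"none\"" fallback standing in when no rebuild process is reported. A short bash rendering of that pattern, assembled from the visible xtrace lines, is below; the variable names mirror the trace, but the function body is a reconstruction, not the verbatim test source.

#!/usr/bin/env bash
# Reconstruction of the verify_raid_bdev_process pattern seen in the xtrace;
# not the verbatim bdev_raid.sh source.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

verify_raid_bdev_process_sketch() {
    local raid_bdev_name=$1
    local process_type=$2   # expected .process.type, e.g. "rebuild" or "none"
    local target=$3         # expected .process.target, e.g. "spare" or "none"
    local raid_bdev_info
    raid_bdev_info=$("$rpc_py" -s "$sock" bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$raid_bdev_name\")")
    # The "// \"none\"" fallback makes a missing process object compare equal to "none".
    [[ "$(jq -r '.process.type // "none"' <<<"$raid_bdev_info")" == "$process_type" ]] &&
        [[ "$(jq -r '.process.target // "none"' <<<"$raid_bdev_info")" == "$target" ]]
}

# As traced above: no rebuild should be reported once the spare rebuild was aborted.
verify_raid_bdev_process_sketch raid_bdev1 none none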
00:33:04.176 [2024-06-10 11:56:36.087118] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:33:04.176 [2024-06-10 11:56:36.087231] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.176 [2024-06-10 11:56:36.087850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.176 [2024-06-10 11:56:36.088015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:04.176 [2024-06-10 11:56:36.088285] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:04.176 [2024-06-10 11:56:36.088395] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:33:04.176 [2024-06-10 11:56:36.088473] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:04.176 BaseBdev1 00:33:04.176 11:56:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.171 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.428 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:05.429 "name": "raid_bdev1", 00:33:05.429 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:33:05.429 "strip_size_kb": 0, 00:33:05.429 "state": "online", 00:33:05.429 "raid_level": "raid1", 00:33:05.429 "superblock": true, 00:33:05.429 "num_base_bdevs": 4, 00:33:05.429 "num_base_bdevs_discovered": 2, 00:33:05.429 "num_base_bdevs_operational": 2, 00:33:05.429 "base_bdevs_list": [ 00:33:05.429 { 00:33:05.429 "name": null, 00:33:05.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.429 "is_configured": false, 00:33:05.429 "data_offset": 2048, 00:33:05.429 "data_size": 63488 00:33:05.429 }, 00:33:05.429 { 00:33:05.429 "name": null, 00:33:05.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.429 "is_configured": false, 00:33:05.429 "data_offset": 2048, 00:33:05.429 "data_size": 63488 00:33:05.429 }, 00:33:05.429 { 00:33:05.429 "name": "BaseBdev3", 00:33:05.429 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:33:05.429 "is_configured": 
true, 00:33:05.429 "data_offset": 2048, 00:33:05.429 "data_size": 63488 00:33:05.429 }, 00:33:05.429 { 00:33:05.429 "name": "BaseBdev4", 00:33:05.429 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:33:05.429 "is_configured": true, 00:33:05.429 "data_offset": 2048, 00:33:05.429 "data_size": 63488 00:33:05.429 } 00:33:05.429 ] 00:33:05.429 }' 00:33:05.429 11:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:05.429 11:56:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.994 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:05.994 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:05.994 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:05.994 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:05.994 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:05.994 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.994 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.252 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:06.252 "name": "raid_bdev1", 00:33:06.252 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:33:06.252 "strip_size_kb": 0, 00:33:06.252 "state": "online", 00:33:06.252 "raid_level": "raid1", 00:33:06.252 "superblock": true, 00:33:06.252 "num_base_bdevs": 4, 00:33:06.252 "num_base_bdevs_discovered": 2, 00:33:06.252 "num_base_bdevs_operational": 2, 00:33:06.252 "base_bdevs_list": [ 00:33:06.252 { 00:33:06.252 "name": null, 00:33:06.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:06.252 "is_configured": false, 00:33:06.252 "data_offset": 2048, 00:33:06.252 "data_size": 63488 00:33:06.252 }, 00:33:06.252 { 00:33:06.252 "name": null, 00:33:06.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:06.252 "is_configured": false, 00:33:06.252 "data_offset": 2048, 00:33:06.252 "data_size": 63488 00:33:06.252 }, 00:33:06.252 { 00:33:06.252 "name": "BaseBdev3", 00:33:06.252 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:33:06.252 "is_configured": true, 00:33:06.252 "data_offset": 2048, 00:33:06.252 "data_size": 63488 00:33:06.252 }, 00:33:06.252 { 00:33:06.252 "name": "BaseBdev4", 00:33:06.252 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:33:06.252 "is_configured": true, 00:33:06.252 "data_offset": 2048, 00:33:06.252 "data_size": 63488 00:33:06.252 } 00:33:06.252 ] 00:33:06.252 }' 00:33:06.252 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:06.252 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:06.252 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 
-- # local es=0 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:06.510 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:06.769 [2024-06-10 11:56:38.599249] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:06.769 [2024-06-10 11:56:38.599628] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:33:06.769 [2024-06-10 11:56:38.599743] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:06.769 request: 00:33:06.769 { 00:33:06.769 "base_bdev": "BaseBdev1", 00:33:06.769 "raid_bdev": "raid_bdev1", 00:33:06.769 "method": "bdev_raid_add_base_bdev", 00:33:06.769 "req_id": 1 00:33:06.769 } 00:33:06.769 Got JSON-RPC error response 00:33:06.769 response: 00:33:06.769 { 00:33:06.769 "code": -22, 00:33:06.769 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:06.769 } 00:33:06.769 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # es=1 00:33:06.769 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:06.769 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:06.769 11:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:06.769 11:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:07.740 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.741 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:08.033 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:08.033 "name": "raid_bdev1", 00:33:08.033 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:33:08.033 "strip_size_kb": 0, 00:33:08.033 "state": "online", 00:33:08.033 "raid_level": "raid1", 00:33:08.033 "superblock": true, 00:33:08.033 "num_base_bdevs": 4, 00:33:08.033 "num_base_bdevs_discovered": 2, 00:33:08.033 "num_base_bdevs_operational": 2, 00:33:08.033 "base_bdevs_list": [ 00:33:08.033 { 00:33:08.033 "name": null, 00:33:08.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.033 "is_configured": false, 00:33:08.033 "data_offset": 2048, 00:33:08.033 "data_size": 63488 00:33:08.033 }, 00:33:08.033 { 00:33:08.033 "name": null, 00:33:08.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.033 "is_configured": false, 00:33:08.033 "data_offset": 2048, 00:33:08.033 "data_size": 63488 00:33:08.033 }, 00:33:08.033 { 00:33:08.033 "name": "BaseBdev3", 00:33:08.033 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:33:08.033 "is_configured": true, 00:33:08.033 "data_offset": 2048, 00:33:08.033 "data_size": 63488 00:33:08.033 }, 00:33:08.033 { 00:33:08.033 "name": "BaseBdev4", 00:33:08.033 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:33:08.033 "is_configured": true, 00:33:08.033 "data_offset": 2048, 00:33:08.033 "data_size": 63488 00:33:08.033 } 00:33:08.033 ] 00:33:08.033 }' 00:33:08.033 11:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:08.033 11:56:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:08.597 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:08.597 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:08.597 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:08.597 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:08.597 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:08.597 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:08.597 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:08.855 "name": "raid_bdev1", 00:33:08.855 "uuid": "bf9ac1d7-f51f-4213-8b22-7e1340572ad4", 00:33:08.855 "strip_size_kb": 0, 00:33:08.855 "state": "online", 00:33:08.855 "raid_level": "raid1", 00:33:08.855 "superblock": 
true, 00:33:08.855 "num_base_bdevs": 4, 00:33:08.855 "num_base_bdevs_discovered": 2, 00:33:08.855 "num_base_bdevs_operational": 2, 00:33:08.855 "base_bdevs_list": [ 00:33:08.855 { 00:33:08.855 "name": null, 00:33:08.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.855 "is_configured": false, 00:33:08.855 "data_offset": 2048, 00:33:08.855 "data_size": 63488 00:33:08.855 }, 00:33:08.855 { 00:33:08.855 "name": null, 00:33:08.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.855 "is_configured": false, 00:33:08.855 "data_offset": 2048, 00:33:08.855 "data_size": 63488 00:33:08.855 }, 00:33:08.855 { 00:33:08.855 "name": "BaseBdev3", 00:33:08.855 "uuid": "367a3d93-8ba5-5944-ad21-5ca3952713a7", 00:33:08.855 "is_configured": true, 00:33:08.855 "data_offset": 2048, 00:33:08.855 "data_size": 63488 00:33:08.855 }, 00:33:08.855 { 00:33:08.855 "name": "BaseBdev4", 00:33:08.855 "uuid": "808e77ec-6c7e-5ce5-bf02-a70acacf07d3", 00:33:08.855 "is_configured": true, 00:33:08.855 "data_offset": 2048, 00:33:08.855 "data_size": 63488 00:33:08.855 } 00:33:08.855 ] 00:33:08.855 }' 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 149332 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@949 -- # '[' -z 149332 ']' 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # kill -0 149332 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # uname 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 149332 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 149332' 00:33:08.855 killing process with pid 149332 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # kill 149332 00:33:08.855 Received shutdown signal, test time was about 60.000000 seconds 00:33:08.855 00:33:08.855 Latency(us) 00:33:08.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.855 =================================================================================================================== 00:33:08.855 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:08.855 11:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # wait 149332 00:33:08.855 [2024-06-10 11:56:40.833713] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:08.855 [2024-06-10 11:56:40.833974] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:08.855 [2024-06-10 11:56:40.834176] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:33:08.855 [2024-06-10 11:56:40.834277] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:33:09.419 [2024-06-10 11:56:41.409737] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:11.318 11:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:33:11.318 00:33:11.318 real 0m40.504s 00:33:11.318 user 0m59.775s 00:33:11.318 sys 0m6.024s 00:33:11.318 11:56:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:11.318 11:56:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 ************************************ 00:33:11.318 END TEST raid_rebuild_test_sb 00:33:11.318 ************************************ 00:33:11.318 11:56:43 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:33:11.318 11:56:43 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:33:11.318 11:56:43 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:11.318 11:56:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 ************************************ 00:33:11.318 START TEST raid_rebuild_test_io 00:33:11.318 ************************************ 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 4 false true true 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
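raid_rebuild_test_io starts here with raid1, four base bdevs, no superblock, and both background I/O and verify enabled. Judging from the RPCs that follow later in this trace, the bdev stack it assembles looks roughly like the sketch below (one malloc bdev wrapped in a passthru bdev per member, then the raid1 volume on top); the sizes match the 32 MiB / 512-byte-block malloc bdevs created further down, and the spare/delay bdevs are omitted here for brevity:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for i in 1 2 3 4; do
      # malloc backing store, then a passthru bdev so the member can be
      # deleted and re-created without touching the underlying data
      $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
      $rpc -s $sock bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
  done
  $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1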
00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=150310 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 150310 /var/tmp/spdk-raid.sock 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@830 -- # '[' -z 150310 ']' 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:11.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:11.318 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 [2024-06-10 11:56:43.169429] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:33:11.318 [2024-06-10 11:56:43.169886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150310 ] 00:33:11.318 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:11.318 Zero copy mechanism will not be used. 
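The application under test in this run is bdevperf, started idle against the raid socket and only later kicked off over RPC. Below is my reading of the command line from the trace; the flag annotations are inferred from the observed behaviour, so check bdevperf --help for the authoritative descriptions:

  #   -t 60            run the workload for 60 seconds
  #   -w randrw -M 50  50/50 random read/write mix
  #   -o 3M -q 2       3 MiB I/Os at queue depth 2 (3145728 B > 65536 B, hence
  #                    the "Zero copy mechanism will not be used" notice)
  #   -z               start idle and wait to be driven over RPC
  #   -L bdev_raid     enable the bdev_raid *DEBUG* messages seen in this log
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  # the harness then waits for the socket and triggers the run:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests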
00:33:11.318 [2024-06-10 11:56:43.352246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.575 [2024-06-10 11:56:43.579073] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.832 [2024-06-10 11:56:43.819743] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:12.090 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:12.090 11:56:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@863 -- # return 0 00:33:12.090 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:12.090 11:56:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:12.348 BaseBdev1_malloc 00:33:12.348 11:56:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:12.606 [2024-06-10 11:56:44.423302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:12.606 [2024-06-10 11:56:44.423627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.606 [2024-06-10 11:56:44.423706] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:12.606 [2024-06-10 11:56:44.423824] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.606 [2024-06-10 11:56:44.426390] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.606 [2024-06-10 11:56:44.426558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:12.606 BaseBdev1 00:33:12.606 11:56:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:12.606 11:56:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:12.864 BaseBdev2_malloc 00:33:12.864 11:56:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:13.122 [2024-06-10 11:56:44.981305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:13.122 [2024-06-10 11:56:44.981608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.122 [2024-06-10 11:56:44.981708] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:33:13.122 [2024-06-10 11:56:44.981896] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.122 [2024-06-10 11:56:44.984472] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.122 [2024-06-10 11:56:44.984651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:13.122 BaseBdev2 00:33:13.122 11:56:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:13.122 11:56:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:13.380 BaseBdev3_malloc 00:33:13.380 11:56:45 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:13.637 [2024-06-10 11:56:45.554262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:13.637 [2024-06-10 11:56:45.554577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.637 [2024-06-10 11:56:45.554652] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:13.637 [2024-06-10 11:56:45.554940] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.637 [2024-06-10 11:56:45.557429] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.637 [2024-06-10 11:56:45.557608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:13.637 BaseBdev3 00:33:13.637 11:56:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:13.637 11:56:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:13.895 BaseBdev4_malloc 00:33:13.895 11:56:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:14.153 [2024-06-10 11:56:46.139478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:14.153 [2024-06-10 11:56:46.139749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.153 [2024-06-10 11:56:46.139825] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:14.153 [2024-06-10 11:56:46.139936] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.153 [2024-06-10 11:56:46.142519] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.153 [2024-06-10 11:56:46.142706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:14.153 BaseBdev4 00:33:14.153 11:56:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:14.410 spare_malloc 00:33:14.410 11:56:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:14.976 spare_delay 00:33:14.976 11:56:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:14.976 [2024-06-10 11:56:46.954024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:14.976 [2024-06-10 11:56:46.954355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.976 [2024-06-10 11:56:46.954501] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:14.976 [2024-06-10 11:56:46.954618] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.976 [2024-06-10 11:56:46.957334] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.976 [2024-06-10 
11:56:46.957508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:14.976 spare 00:33:14.976 11:56:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:33:15.234 [2024-06-10 11:56:47.254341] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:15.234 [2024-06-10 11:56:47.256724] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:15.234 [2024-06-10 11:56:47.256931] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:15.234 [2024-06-10 11:56:47.257021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:15.234 [2024-06-10 11:56:47.257281] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:33:15.234 [2024-06-10 11:56:47.257378] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:15.234 [2024-06-10 11:56:47.257622] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:15.234 [2024-06-10 11:56:47.258106] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:33:15.234 [2024-06-10 11:56:47.258242] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:33:15.234 [2024-06-10 11:56:47.258575] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:15.234 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:15.234 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:15.234 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:15.234 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:15.234 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:15.235 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:15.235 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:15.235 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:15.235 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:15.235 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:15.235 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.235 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.803 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:15.803 "name": "raid_bdev1", 00:33:15.803 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:15.803 "strip_size_kb": 0, 00:33:15.803 "state": "online", 00:33:15.803 "raid_level": "raid1", 00:33:15.803 "superblock": false, 00:33:15.803 "num_base_bdevs": 4, 00:33:15.803 "num_base_bdevs_discovered": 4, 00:33:15.803 "num_base_bdevs_operational": 4, 00:33:15.803 "base_bdevs_list": [ 00:33:15.803 { 
00:33:15.803 "name": "BaseBdev1", 00:33:15.803 "uuid": "baf96b40-e561-5697-a167-ac403898f9b3", 00:33:15.803 "is_configured": true, 00:33:15.803 "data_offset": 0, 00:33:15.803 "data_size": 65536 00:33:15.803 }, 00:33:15.803 { 00:33:15.803 "name": "BaseBdev2", 00:33:15.803 "uuid": "85266424-ecf8-55b7-b204-09e52a3542db", 00:33:15.803 "is_configured": true, 00:33:15.803 "data_offset": 0, 00:33:15.803 "data_size": 65536 00:33:15.803 }, 00:33:15.803 { 00:33:15.803 "name": "BaseBdev3", 00:33:15.803 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:15.803 "is_configured": true, 00:33:15.803 "data_offset": 0, 00:33:15.803 "data_size": 65536 00:33:15.803 }, 00:33:15.803 { 00:33:15.803 "name": "BaseBdev4", 00:33:15.803 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:15.803 "is_configured": true, 00:33:15.803 "data_offset": 0, 00:33:15.803 "data_size": 65536 00:33:15.803 } 00:33:15.803 ] 00:33:15.803 }' 00:33:15.803 11:56:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:15.803 11:56:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.369 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:16.369 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:16.627 [2024-06-10 11:56:48.583124] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:16.627 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:33:16.627 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.627 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:16.884 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:33:16.884 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:33:16.884 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:16.884 11:56:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:33:17.143 [2024-06-10 11:56:48.997114] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:17.143 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:17.143 Zero copy mechanism will not be used. 00:33:17.143 Running I/O for 60 seconds... 
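Around the time I/O starts, the harness removes BaseBdev1 from the raid and then verifies that the volume stays online as a degraded raid1 with three of four members. A compressed sketch of that remove-and-verify step, using only the RPCs visible in this trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # drop one member while bdevperf keeps issuing I/O to raid_bdev1
  $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1
  info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # raid1 tolerates the loss: still online, one fewer operational base bdev
  jq -e '.state == "online" and .num_base_bdevs_operational == 3' <<< "$info" >/dev/null &&
      echo "raid_bdev1 is degraded but online"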
00:33:17.143 [2024-06-10 11:56:49.177256] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:17.143 [2024-06-10 11:56:49.177675] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.402 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.660 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:17.660 "name": "raid_bdev1", 00:33:17.660 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:17.660 "strip_size_kb": 0, 00:33:17.660 "state": "online", 00:33:17.660 "raid_level": "raid1", 00:33:17.660 "superblock": false, 00:33:17.660 "num_base_bdevs": 4, 00:33:17.660 "num_base_bdevs_discovered": 3, 00:33:17.660 "num_base_bdevs_operational": 3, 00:33:17.660 "base_bdevs_list": [ 00:33:17.660 { 00:33:17.660 "name": null, 00:33:17.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.660 "is_configured": false, 00:33:17.660 "data_offset": 0, 00:33:17.660 "data_size": 65536 00:33:17.660 }, 00:33:17.660 { 00:33:17.660 "name": "BaseBdev2", 00:33:17.660 "uuid": "85266424-ecf8-55b7-b204-09e52a3542db", 00:33:17.660 "is_configured": true, 00:33:17.660 "data_offset": 0, 00:33:17.660 "data_size": 65536 00:33:17.660 }, 00:33:17.660 { 00:33:17.660 "name": "BaseBdev3", 00:33:17.660 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:17.660 "is_configured": true, 00:33:17.660 "data_offset": 0, 00:33:17.660 "data_size": 65536 00:33:17.660 }, 00:33:17.660 { 00:33:17.660 "name": "BaseBdev4", 00:33:17.660 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:17.660 "is_configured": true, 00:33:17.660 "data_offset": 0, 00:33:17.660 "data_size": 65536 00:33:17.660 } 00:33:17.660 ] 00:33:17.660 }' 00:33:17.660 11:56:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:17.660 11:56:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:18.226 11:56:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:18.485 [2024-06-10 11:56:50.485284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:33:18.744 11:56:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:18.744 [2024-06-10 11:56:50.561520] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:18.744 [2024-06-10 11:56:50.564054] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:18.744 [2024-06-10 11:56:50.691831] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:19.003 [2024-06-10 11:56:50.821629] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:19.003 [2024-06-10 11:56:50.822607] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:19.262 [2024-06-10 11:56:51.166225] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:19.521 [2024-06-10 11:56:51.428231] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:19.521 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.521 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:19.521 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:19.521 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:19.521 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:19.522 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.522 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.779 [2024-06-10 11:56:51.767344] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:20.037 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:20.037 "name": "raid_bdev1", 00:33:20.037 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:20.037 "strip_size_kb": 0, 00:33:20.037 "state": "online", 00:33:20.037 "raid_level": "raid1", 00:33:20.037 "superblock": false, 00:33:20.037 "num_base_bdevs": 4, 00:33:20.037 "num_base_bdevs_discovered": 4, 00:33:20.037 "num_base_bdevs_operational": 4, 00:33:20.037 "process": { 00:33:20.037 "type": "rebuild", 00:33:20.037 "target": "spare", 00:33:20.037 "progress": { 00:33:20.037 "blocks": 14336, 00:33:20.037 "percent": 21 00:33:20.037 } 00:33:20.037 }, 00:33:20.037 "base_bdevs_list": [ 00:33:20.037 { 00:33:20.037 "name": "spare", 00:33:20.037 "uuid": "f5dbe5d1-06dc-5352-ba77-c59bf8226c40", 00:33:20.037 "is_configured": true, 00:33:20.037 "data_offset": 0, 00:33:20.037 "data_size": 65536 00:33:20.037 }, 00:33:20.037 { 00:33:20.037 "name": "BaseBdev2", 00:33:20.037 "uuid": "85266424-ecf8-55b7-b204-09e52a3542db", 00:33:20.037 "is_configured": true, 00:33:20.037 "data_offset": 0, 00:33:20.037 "data_size": 65536 00:33:20.037 }, 00:33:20.037 { 00:33:20.037 "name": "BaseBdev3", 00:33:20.037 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:20.037 "is_configured": true, 00:33:20.037 "data_offset": 0, 00:33:20.037 "data_size": 
65536 00:33:20.037 }, 00:33:20.037 { 00:33:20.037 "name": "BaseBdev4", 00:33:20.037 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:20.037 "is_configured": true, 00:33:20.037 "data_offset": 0, 00:33:20.037 "data_size": 65536 00:33:20.037 } 00:33:20.037 ] 00:33:20.037 }' 00:33:20.037 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:20.037 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.037 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:20.037 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.037 11:56:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:20.037 [2024-06-10 11:56:52.040477] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:20.037 [2024-06-10 11:56:52.040975] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:20.295 [2024-06-10 11:56:52.200114] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:20.295 [2024-06-10 11:56:52.284104] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:20.295 [2024-06-10 11:56:52.330955] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:20.295 [2024-06-10 11:56:52.347554] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:20.295 [2024-06-10 11:56:52.347779] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:20.295 [2024-06-10 11:56:52.347825] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:20.593 [2024-06-10 11:56:52.372780] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.593 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:20.870 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:20.870 "name": "raid_bdev1", 00:33:20.870 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:20.870 "strip_size_kb": 0, 00:33:20.870 "state": "online", 00:33:20.870 "raid_level": "raid1", 00:33:20.870 "superblock": false, 00:33:20.870 "num_base_bdevs": 4, 00:33:20.870 "num_base_bdevs_discovered": 3, 00:33:20.870 "num_base_bdevs_operational": 3, 00:33:20.870 "base_bdevs_list": [ 00:33:20.870 { 00:33:20.870 "name": null, 00:33:20.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.870 "is_configured": false, 00:33:20.870 "data_offset": 0, 00:33:20.870 "data_size": 65536 00:33:20.870 }, 00:33:20.870 { 00:33:20.870 "name": "BaseBdev2", 00:33:20.870 "uuid": "85266424-ecf8-55b7-b204-09e52a3542db", 00:33:20.870 "is_configured": true, 00:33:20.870 "data_offset": 0, 00:33:20.870 "data_size": 65536 00:33:20.870 }, 00:33:20.870 { 00:33:20.870 "name": "BaseBdev3", 00:33:20.870 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:20.870 "is_configured": true, 00:33:20.870 "data_offset": 0, 00:33:20.870 "data_size": 65536 00:33:20.870 }, 00:33:20.870 { 00:33:20.870 "name": "BaseBdev4", 00:33:20.870 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:20.870 "is_configured": true, 00:33:20.870 "data_offset": 0, 00:33:20.870 "data_size": 65536 00:33:20.870 } 00:33:20.870 ] 00:33:20.870 }' 00:33:20.870 11:56:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:20.870 11:56:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:21.437 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:21.437 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:21.437 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:21.437 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:21.437 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:21.437 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.437 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.696 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:21.696 "name": "raid_bdev1", 00:33:21.696 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:21.696 "strip_size_kb": 0, 00:33:21.696 "state": "online", 00:33:21.696 "raid_level": "raid1", 00:33:21.696 "superblock": false, 00:33:21.696 "num_base_bdevs": 4, 00:33:21.696 "num_base_bdevs_discovered": 3, 00:33:21.696 "num_base_bdevs_operational": 3, 00:33:21.696 "base_bdevs_list": [ 00:33:21.696 { 00:33:21.696 "name": null, 00:33:21.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.696 "is_configured": false, 00:33:21.696 "data_offset": 0, 00:33:21.696 "data_size": 65536 00:33:21.696 }, 00:33:21.696 { 00:33:21.696 "name": "BaseBdev2", 00:33:21.696 "uuid": "85266424-ecf8-55b7-b204-09e52a3542db", 00:33:21.696 "is_configured": true, 00:33:21.696 "data_offset": 0, 00:33:21.696 "data_size": 65536 00:33:21.696 }, 00:33:21.696 { 00:33:21.696 "name": "BaseBdev3", 00:33:21.696 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 
00:33:21.696 "is_configured": true, 00:33:21.696 "data_offset": 0, 00:33:21.696 "data_size": 65536 00:33:21.696 }, 00:33:21.696 { 00:33:21.696 "name": "BaseBdev4", 00:33:21.696 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:21.696 "is_configured": true, 00:33:21.696 "data_offset": 0, 00:33:21.696 "data_size": 65536 00:33:21.696 } 00:33:21.696 ] 00:33:21.696 }' 00:33:21.696 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:21.696 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:21.696 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:21.696 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:21.696 11:56:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:21.955 [2024-06-10 11:56:54.003208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:22.214 [2024-06-10 11:56:54.068397] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:22.214 [2024-06-10 11:56:54.070965] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:22.214 11:56:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:22.214 [2024-06-10 11:56:54.198002] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:22.472 [2024-06-10 11:56:54.438303] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:22.472 [2024-06-10 11:56:54.439232] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:22.731 [2024-06-10 11:56:54.785735] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:22.731 [2024-06-10 11:56:54.786468] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.305 [2024-06-10 11:56:55.185681] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:23.305 "name": "raid_bdev1", 00:33:23.305 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:23.305 "strip_size_kb": 0, 
00:33:23.305 "state": "online", 00:33:23.305 "raid_level": "raid1", 00:33:23.305 "superblock": false, 00:33:23.305 "num_base_bdevs": 4, 00:33:23.305 "num_base_bdevs_discovered": 4, 00:33:23.305 "num_base_bdevs_operational": 4, 00:33:23.305 "process": { 00:33:23.305 "type": "rebuild", 00:33:23.305 "target": "spare", 00:33:23.305 "progress": { 00:33:23.305 "blocks": 14336, 00:33:23.305 "percent": 21 00:33:23.305 } 00:33:23.305 }, 00:33:23.305 "base_bdevs_list": [ 00:33:23.305 { 00:33:23.305 "name": "spare", 00:33:23.305 "uuid": "f5dbe5d1-06dc-5352-ba77-c59bf8226c40", 00:33:23.305 "is_configured": true, 00:33:23.305 "data_offset": 0, 00:33:23.305 "data_size": 65536 00:33:23.305 }, 00:33:23.305 { 00:33:23.305 "name": "BaseBdev2", 00:33:23.305 "uuid": "85266424-ecf8-55b7-b204-09e52a3542db", 00:33:23.305 "is_configured": true, 00:33:23.305 "data_offset": 0, 00:33:23.305 "data_size": 65536 00:33:23.305 }, 00:33:23.305 { 00:33:23.305 "name": "BaseBdev3", 00:33:23.305 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:23.305 "is_configured": true, 00:33:23.305 "data_offset": 0, 00:33:23.305 "data_size": 65536 00:33:23.305 }, 00:33:23.305 { 00:33:23.305 "name": "BaseBdev4", 00:33:23.305 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:23.305 "is_configured": true, 00:33:23.305 "data_offset": 0, 00:33:23.305 "data_size": 65536 00:33:23.305 } 00:33:23.305 ] 00:33:23.305 }' 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:23.305 [2024-06-10 11:56:55.316589] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:23.305 [2024-06-10 11:56:55.317483] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:23.305 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:23.306 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:23.564 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:23.564 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:33:23.564 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:33:23.564 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:33:23.564 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:33:23.564 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:23.564 [2024-06-10 11:56:55.597029] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:23.822 [2024-06-10 11:56:55.782447] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:33:23.822 [2024-06-10 11:56:55.782722] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:33:23.822 [2024-06-10 11:56:55.794700] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 
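At this point the second scenario drops BaseBdev2 while the rebuild onto the spare is still running, and the harness then keeps re-reading the raid bdev until the picture settles. The actual script tracks this with its num_base_bdevs_* locals and a SECONDS-based timeout; the polling sketch below shows the same idea in isolation, with the loop bound and sleep chosen for illustration rather than taken from bdev_raid.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for _ in $(seq 1 60); do
      info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      ptype=$(jq -r '.process.type // "none"' <<< "$info")
      blocks=$(jq -r '.process.progress.blocks // 0' <<< "$info")
      echo "process=$ptype rebuilt_blocks=$blocks"
      [[ $ptype == none ]] && break   # rebuild finished (or was never running)
      sleep 1
  done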
00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.822 11:56:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.081 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:24.081 "name": "raid_bdev1", 00:33:24.081 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:24.081 "strip_size_kb": 0, 00:33:24.081 "state": "online", 00:33:24.081 "raid_level": "raid1", 00:33:24.081 "superblock": false, 00:33:24.081 "num_base_bdevs": 4, 00:33:24.081 "num_base_bdevs_discovered": 3, 00:33:24.081 "num_base_bdevs_operational": 3, 00:33:24.081 "process": { 00:33:24.081 "type": "rebuild", 00:33:24.081 "target": "spare", 00:33:24.081 "progress": { 00:33:24.081 "blocks": 22528, 00:33:24.081 "percent": 34 00:33:24.081 } 00:33:24.081 }, 00:33:24.081 "base_bdevs_list": [ 00:33:24.081 { 00:33:24.081 "name": "spare", 00:33:24.081 "uuid": "f5dbe5d1-06dc-5352-ba77-c59bf8226c40", 00:33:24.081 "is_configured": true, 00:33:24.081 "data_offset": 0, 00:33:24.081 "data_size": 65536 00:33:24.081 }, 00:33:24.081 { 00:33:24.081 "name": null, 00:33:24.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.081 "is_configured": false, 00:33:24.081 "data_offset": 0, 00:33:24.081 "data_size": 65536 00:33:24.081 }, 00:33:24.081 { 00:33:24.081 "name": "BaseBdev3", 00:33:24.081 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:24.081 "is_configured": true, 00:33:24.081 "data_offset": 0, 00:33:24.081 "data_size": 65536 00:33:24.081 }, 00:33:24.081 { 00:33:24.081 "name": "BaseBdev4", 00:33:24.081 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:24.081 "is_configured": true, 00:33:24.081 "data_offset": 0, 00:33:24.081 "data_size": 65536 00:33:24.081 } 00:33:24.081 ] 00:33:24.081 }' 00:33:24.081 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:24.081 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:24.081 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=1076 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:24.339 11:56:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.339 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.339 [2024-06-10 11:56:56.389745] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:33:24.596 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:24.596 "name": "raid_bdev1", 00:33:24.596 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:24.596 "strip_size_kb": 0, 00:33:24.596 "state": "online", 00:33:24.596 "raid_level": "raid1", 00:33:24.596 "superblock": false, 00:33:24.596 "num_base_bdevs": 4, 00:33:24.596 "num_base_bdevs_discovered": 3, 00:33:24.596 "num_base_bdevs_operational": 3, 00:33:24.596 "process": { 00:33:24.596 "type": "rebuild", 00:33:24.596 "target": "spare", 00:33:24.596 "progress": { 00:33:24.596 "blocks": 28672, 00:33:24.596 "percent": 43 00:33:24.596 } 00:33:24.596 }, 00:33:24.596 "base_bdevs_list": [ 00:33:24.596 { 00:33:24.596 "name": "spare", 00:33:24.596 "uuid": "f5dbe5d1-06dc-5352-ba77-c59bf8226c40", 00:33:24.596 "is_configured": true, 00:33:24.596 "data_offset": 0, 00:33:24.596 "data_size": 65536 00:33:24.596 }, 00:33:24.596 { 00:33:24.596 "name": null, 00:33:24.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.596 "is_configured": false, 00:33:24.596 "data_offset": 0, 00:33:24.596 "data_size": 65536 00:33:24.596 }, 00:33:24.596 { 00:33:24.596 "name": "BaseBdev3", 00:33:24.596 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:24.596 "is_configured": true, 00:33:24.596 "data_offset": 0, 00:33:24.596 "data_size": 65536 00:33:24.596 }, 00:33:24.596 { 00:33:24.596 "name": "BaseBdev4", 00:33:24.597 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:24.597 "is_configured": true, 00:33:24.597 "data_offset": 0, 00:33:24.597 "data_size": 65536 00:33:24.597 } 00:33:24.597 ] 00:33:24.597 }' 00:33:24.597 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:24.597 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:24.597 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:24.597 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:24.597 11:56:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:24.854 [2024-06-10 11:56:56.726429] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:33:25.419 [2024-06-10 11:56:57.184291] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:33:25.752 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:25.752 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:25.752 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:25.752 11:56:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:25.752 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:25.752 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:25.752 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:25.752 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.011 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:26.011 "name": "raid_bdev1", 00:33:26.011 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:26.011 "strip_size_kb": 0, 00:33:26.011 "state": "online", 00:33:26.011 "raid_level": "raid1", 00:33:26.011 "superblock": false, 00:33:26.011 "num_base_bdevs": 4, 00:33:26.011 "num_base_bdevs_discovered": 3, 00:33:26.011 "num_base_bdevs_operational": 3, 00:33:26.011 "process": { 00:33:26.011 "type": "rebuild", 00:33:26.011 "target": "spare", 00:33:26.011 "progress": { 00:33:26.011 "blocks": 49152, 00:33:26.011 "percent": 75 00:33:26.011 } 00:33:26.011 }, 00:33:26.011 "base_bdevs_list": [ 00:33:26.011 { 00:33:26.011 "name": "spare", 00:33:26.011 "uuid": "f5dbe5d1-06dc-5352-ba77-c59bf8226c40", 00:33:26.011 "is_configured": true, 00:33:26.011 "data_offset": 0, 00:33:26.011 "data_size": 65536 00:33:26.011 }, 00:33:26.011 { 00:33:26.011 "name": null, 00:33:26.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:26.011 "is_configured": false, 00:33:26.011 "data_offset": 0, 00:33:26.011 "data_size": 65536 00:33:26.011 }, 00:33:26.011 { 00:33:26.011 "name": "BaseBdev3", 00:33:26.011 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:26.011 "is_configured": true, 00:33:26.011 "data_offset": 0, 00:33:26.011 "data_size": 65536 00:33:26.011 }, 00:33:26.011 { 00:33:26.011 "name": "BaseBdev4", 00:33:26.011 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:26.011 "is_configured": true, 00:33:26.011 "data_offset": 0, 00:33:26.011 "data_size": 65536 00:33:26.011 } 00:33:26.011 ] 00:33:26.011 }' 00:33:26.011 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:26.011 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:26.011 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:26.011 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:26.011 11:56:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:26.269 [2024-06-10 11:56:58.309743] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:33:26.834 [2024-06-10 11:56:58.656194] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:26.834 [2024-06-10 11:56:58.762919] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:26.834 [2024-06-10 11:56:58.765888] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:27.092 11:56:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:27.092 11:56:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:27.092 11:56:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:27.092 11:56:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:27.092 11:56:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:27.092 11:56:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:27.092 11:56:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.092 11:56:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:27.350 "name": "raid_bdev1", 00:33:27.350 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:27.350 "strip_size_kb": 0, 00:33:27.350 "state": "online", 00:33:27.350 "raid_level": "raid1", 00:33:27.350 "superblock": false, 00:33:27.350 "num_base_bdevs": 4, 00:33:27.350 "num_base_bdevs_discovered": 3, 00:33:27.350 "num_base_bdevs_operational": 3, 00:33:27.350 "base_bdevs_list": [ 00:33:27.350 { 00:33:27.350 "name": "spare", 00:33:27.350 "uuid": "f5dbe5d1-06dc-5352-ba77-c59bf8226c40", 00:33:27.350 "is_configured": true, 00:33:27.350 "data_offset": 0, 00:33:27.350 "data_size": 65536 00:33:27.350 }, 00:33:27.350 { 00:33:27.350 "name": null, 00:33:27.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.350 "is_configured": false, 00:33:27.350 "data_offset": 0, 00:33:27.350 "data_size": 65536 00:33:27.350 }, 00:33:27.350 { 00:33:27.350 "name": "BaseBdev3", 00:33:27.350 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:27.350 "is_configured": true, 00:33:27.350 "data_offset": 0, 00:33:27.350 "data_size": 65536 00:33:27.350 }, 00:33:27.350 { 00:33:27.350 "name": "BaseBdev4", 00:33:27.350 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:27.350 "is_configured": true, 00:33:27.350 "data_offset": 0, 00:33:27.350 "data_size": 65536 00:33:27.350 } 00:33:27.350 ] 00:33:27.350 }' 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.350 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.607 11:56:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:27.607 "name": "raid_bdev1", 00:33:27.607 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:27.607 "strip_size_kb": 0, 00:33:27.607 "state": "online", 00:33:27.607 "raid_level": "raid1", 00:33:27.607 "superblock": false, 00:33:27.607 "num_base_bdevs": 4, 00:33:27.607 "num_base_bdevs_discovered": 3, 00:33:27.607 "num_base_bdevs_operational": 3, 00:33:27.607 "base_bdevs_list": [ 00:33:27.607 { 00:33:27.607 "name": "spare", 00:33:27.607 "uuid": "f5dbe5d1-06dc-5352-ba77-c59bf8226c40", 00:33:27.607 "is_configured": true, 00:33:27.607 "data_offset": 0, 00:33:27.607 "data_size": 65536 00:33:27.607 }, 00:33:27.607 { 00:33:27.607 "name": null, 00:33:27.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.607 "is_configured": false, 00:33:27.607 "data_offset": 0, 00:33:27.607 "data_size": 65536 00:33:27.607 }, 00:33:27.607 { 00:33:27.607 "name": "BaseBdev3", 00:33:27.607 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:27.607 "is_configured": true, 00:33:27.607 "data_offset": 0, 00:33:27.607 "data_size": 65536 00:33:27.607 }, 00:33:27.607 { 00:33:27.607 "name": "BaseBdev4", 00:33:27.607 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:27.607 "is_configured": true, 00:33:27.607 "data_offset": 0, 00:33:27.607 "data_size": 65536 00:33:27.607 } 00:33:27.607 ] 00:33:27.607 }' 00:33:27.607 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.864 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.122 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:28.122 "name": "raid_bdev1", 00:33:28.122 "uuid": "33a780be-239d-4c54-8735-b2bd5ea6554f", 00:33:28.122 "strip_size_kb": 0, 00:33:28.122 "state": "online", 00:33:28.122 "raid_level": "raid1", 00:33:28.122 
"superblock": false, 00:33:28.122 "num_base_bdevs": 4, 00:33:28.122 "num_base_bdevs_discovered": 3, 00:33:28.122 "num_base_bdevs_operational": 3, 00:33:28.122 "base_bdevs_list": [ 00:33:28.122 { 00:33:28.122 "name": "spare", 00:33:28.122 "uuid": "f5dbe5d1-06dc-5352-ba77-c59bf8226c40", 00:33:28.122 "is_configured": true, 00:33:28.122 "data_offset": 0, 00:33:28.122 "data_size": 65536 00:33:28.122 }, 00:33:28.122 { 00:33:28.122 "name": null, 00:33:28.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.122 "is_configured": false, 00:33:28.122 "data_offset": 0, 00:33:28.122 "data_size": 65536 00:33:28.122 }, 00:33:28.122 { 00:33:28.122 "name": "BaseBdev3", 00:33:28.122 "uuid": "45a66a40-2b7e-5c94-8296-5317ca230b1b", 00:33:28.122 "is_configured": true, 00:33:28.122 "data_offset": 0, 00:33:28.122 "data_size": 65536 00:33:28.122 }, 00:33:28.122 { 00:33:28.122 "name": "BaseBdev4", 00:33:28.122 "uuid": "80269941-d29c-5498-ad3d-e37641cb5e6f", 00:33:28.122 "is_configured": true, 00:33:28.122 "data_offset": 0, 00:33:28.122 "data_size": 65536 00:33:28.122 } 00:33:28.122 ] 00:33:28.122 }' 00:33:28.122 11:56:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:28.122 11:56:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:28.705 11:57:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:28.964 [2024-06-10 11:57:00.956400] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:28.964 [2024-06-10 11:57:00.956658] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:29.222 00:33:29.222 Latency(us) 00:33:29.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.222 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:33:29.222 raid_bdev1 : 12.05 93.68 281.03 0.00 0.00 14638.71 349.14 125829.12 00:33:29.222 =================================================================================================================== 00:33:29.222 Total : 93.68 281.03 0.00 0.00 14638.71 349.14 125829.12 00:33:29.222 [2024-06-10 11:57:01.083186] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.222 0 00:33:29.223 [2024-06-10 11:57:01.083411] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:29.223 [2024-06-10 11:57:01.083528] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:29.223 [2024-06-10 11:57:01.083540] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:33:29.223 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.223 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:29.481 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:33:29.740 /dev/nbd0 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:29.740 1+0 records in 00:33:29.740 1+0 records out 00:33:29.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539827 s, 7.6 MB/s 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:29.740 
11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:29.740 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:33:29.998 /dev/nbd1 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:29.998 1+0 records in 00:33:29.998 1+0 records out 00:33:29.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735033 s, 5.6 MB/s 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:29.998 11:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:30.262 11:57:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:30.262 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:30.262 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:30.262 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:30.262 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:30.262 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:30.262 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:30.525 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:33:30.864 /dev/nbd1 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:30.864 11:57:02 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:30.864 1+0 records in 00:33:30.864 1+0 records out 00:33:30.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072321 s, 5.7 MB/s 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:30.864 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:31.122 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:31.122 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:31.122 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:31.122 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:31.122 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:31.122 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:31.122 11:57:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:31.378 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 150310 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@949 -- # '[' -z 150310 ']' 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # kill -0 150310 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # uname 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 150310 00:33:31.635 killing process with pid 150310 00:33:31.635 Received shutdown signal, test time was about 14.629191 seconds 00:33:31.635 00:33:31.635 Latency(us) 00:33:31.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.635 =================================================================================================================== 00:33:31.635 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # echo 'killing process with pid 150310' 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # kill 150310 00:33:31.635 11:57:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # wait 150310 00:33:31.635 [2024-06-10 11:57:03.629254] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:32.199 [2024-06-10 11:57:04.200578] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:34.096 ************************************ 00:33:34.096 END TEST raid_rebuild_test_io 00:33:34.096 ************************************ 
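That banner closes the first test case. Before the bdevperf process was killed, the rebuilt data was verified by exporting the spare and each surviving base bdev through NBD and comparing them byte for byte; BaseBdev2, whose slot was emptied earlier, is skipped via the continue branch. A condensed sketch of that comparison, put together from the nbd_start_disk, cmp and nbd_stop_disk calls recorded above (device nodes and bdev names as in this run):

    # Compare the rebuilt spare against the remaining base bdevs over NBD.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock nbd_start_disk spare /dev/nbd0
    for bdev in BaseBdev3 BaseBdev4; do      # BaseBdev2 was removed, so it is skipped
        $rpc -s $sock nbd_start_disk "$bdev" /dev/nbd1
        cmp -i 0 /dev/nbd0 /dev/nbd1         # cmp exits non-zero on the first differing byte
        $rpc -s $sock nbd_stop_disk /dev/nbd1
    done
    $rpc -s $sock nbd_stop_disk /dev/nbd0

The timing summary that follows reports roughly 23 seconds of wall-clock time for the whole raid_rebuild_test_io run, of which about 12 were spent driving I/O against the array.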
00:33:34.096 11:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:33:34.096 00:33:34.096 real 0m22.920s 00:33:34.096 user 0m35.116s 00:33:34.096 sys 0m3.158s 00:33:34.096 11:57:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:34.096 11:57:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:34.096 11:57:06 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:33:34.096 11:57:06 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:33:34.096 11:57:06 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:34.096 11:57:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:34.096 ************************************ 00:33:34.096 START TEST raid_rebuild_test_sb_io 00:33:34.096 ************************************ 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 4 true true true 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:34.096 11:57:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=150858 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 150858 /var/tmp/spdk-raid.sock 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@830 -- # '[' -z 150858 ']' 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:34.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:34.096 11:57:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:34.096 [2024-06-10 11:57:06.145430] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:33:34.096 [2024-06-10 11:57:06.145848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150858 ] 00:33:34.096 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:34.096 Zero copy mechanism will not be used. 
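The second test repeats the rebuild scenario with superblocks enabled and real background I/O, so the bdevperf invocation above is worth unpacking. The job line printed later in the log ("workload: randrw, percentage: 50, depth: 2, IO size: 3145728") and the 60-second run notice confirm how the flags map onto the traffic; the restatement below uses the exact command as launched here, with flag descriptions inferred from that output (-U is left unannotated):

    # Flags as launched:
    #   -r /var/tmp/spdk-raid.sock   RPC socket shared with the test script
    #   -T raid_bdev1                bdev the I/O job runs against
    #   -t 60                        job duration in seconds
    #   -w randrw -M 50              random I/O, 50% reads / 50% writes
    #   -o 3M -q 2                   3 MiB I/Os at queue depth 2
    #   -z                           wait for the perform_tests RPC before starting the job
    #   -L bdev_raid                 enable bdev_raid debug logging
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid

The 3 MiB I/O size is also what triggers the notice just above: it exceeds the 65536-byte zero-copy threshold, so the zero-copy path is disabled for this job.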
00:33:34.354 [2024-06-10 11:57:06.327760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.611 [2024-06-10 11:57:06.567949] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.869 [2024-06-10 11:57:06.833874] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:35.127 11:57:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:35.127 11:57:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@863 -- # return 0 00:33:35.127 11:57:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:35.127 11:57:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:35.383 BaseBdev1_malloc 00:33:35.383 11:57:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:35.641 [2024-06-10 11:57:07.567192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:35.641 [2024-06-10 11:57:07.567442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:35.641 [2024-06-10 11:57:07.567610] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:35.641 [2024-06-10 11:57:07.567725] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:35.641 [2024-06-10 11:57:07.570435] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:35.641 [2024-06-10 11:57:07.570602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:35.641 BaseBdev1 00:33:35.641 11:57:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:35.641 11:57:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:35.899 BaseBdev2_malloc 00:33:35.899 11:57:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:36.224 [2024-06-10 11:57:08.143175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:36.224 [2024-06-10 11:57:08.143526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.224 [2024-06-10 11:57:08.143691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:33:36.224 [2024-06-10 11:57:08.143793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.224 [2024-06-10 11:57:08.146509] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.224 [2024-06-10 11:57:08.146696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:36.224 BaseBdev2 00:33:36.224 11:57:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:36.224 11:57:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:36.480 BaseBdev3_malloc 00:33:36.480 11:57:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:36.738 [2024-06-10 11:57:08.696133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:36.738 [2024-06-10 11:57:08.696411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.738 [2024-06-10 11:57:08.696487] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:36.738 [2024-06-10 11:57:08.696663] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.738 [2024-06-10 11:57:08.699304] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.738 [2024-06-10 11:57:08.699477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:36.738 BaseBdev3 00:33:36.738 11:57:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:36.738 11:57:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:36.995 BaseBdev4_malloc 00:33:36.995 11:57:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:37.251 [2024-06-10 11:57:09.227696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:37.251 [2024-06-10 11:57:09.228012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:37.251 [2024-06-10 11:57:09.228088] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:37.251 [2024-06-10 11:57:09.228349] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:37.251 [2024-06-10 11:57:09.230958] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:37.251 [2024-06-10 11:57:09.231137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:37.251 BaseBdev4 00:33:37.251 11:57:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:37.508 spare_malloc 00:33:37.508 11:57:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:37.766 spare_delay 00:33:37.766 11:57:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:38.024 [2024-06-10 11:57:09.991940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:38.024 [2024-06-10 11:57:09.992233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.024 [2024-06-10 11:57:09.992320] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:38.024 [2024-06-10 11:57:09.992426] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.024 [2024-06-10 11:57:09.995008] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:33:38.024 [2024-06-10 11:57:09.995187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:38.024 spare 00:33:38.024 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:33:38.290 [2024-06-10 11:57:10.276704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:38.290 [2024-06-10 11:57:10.279169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:38.290 [2024-06-10 11:57:10.279382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:38.290 [2024-06-10 11:57:10.279556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:38.290 [2024-06-10 11:57:10.279830] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:33:38.290 [2024-06-10 11:57:10.279880] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:38.290 [2024-06-10 11:57:10.280101] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:38.290 [2024-06-10 11:57:10.281107] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:33:38.290 [2024-06-10 11:57:10.281344] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:33:38.290 [2024-06-10 11:57:10.281892] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.290 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.591 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:38.591 "name": "raid_bdev1", 00:33:38.591 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:38.591 "strip_size_kb": 0, 00:33:38.591 "state": "online", 00:33:38.591 "raid_level": "raid1", 00:33:38.591 "superblock": true, 00:33:38.591 "num_base_bdevs": 4, 00:33:38.591 "num_base_bdevs_discovered": 4, 00:33:38.591 
"num_base_bdevs_operational": 4, 00:33:38.591 "base_bdevs_list": [ 00:33:38.591 { 00:33:38.591 "name": "BaseBdev1", 00:33:38.591 "uuid": "e338c740-7d3d-5843-b54e-8d82e1d1da71", 00:33:38.591 "is_configured": true, 00:33:38.591 "data_offset": 2048, 00:33:38.591 "data_size": 63488 00:33:38.591 }, 00:33:38.591 { 00:33:38.591 "name": "BaseBdev2", 00:33:38.591 "uuid": "f78d0575-e83a-5ea5-80ee-d4297005aea2", 00:33:38.591 "is_configured": true, 00:33:38.591 "data_offset": 2048, 00:33:38.591 "data_size": 63488 00:33:38.591 }, 00:33:38.591 { 00:33:38.591 "name": "BaseBdev3", 00:33:38.591 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:38.591 "is_configured": true, 00:33:38.591 "data_offset": 2048, 00:33:38.591 "data_size": 63488 00:33:38.591 }, 00:33:38.591 { 00:33:38.591 "name": "BaseBdev4", 00:33:38.591 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:38.591 "is_configured": true, 00:33:38.591 "data_offset": 2048, 00:33:38.591 "data_size": 63488 00:33:38.591 } 00:33:38.591 ] 00:33:38.591 }' 00:33:38.591 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:38.591 11:57:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:39.525 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:39.525 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:39.525 [2024-06-10 11:57:11.450436] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:39.525 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:33:39.525 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.525 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:39.783 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:33:39.783 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:33:39.783 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:39.783 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:33:40.041 [2024-06-10 11:57:11.848266] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:40.041 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:40.041 Zero copy mechanism will not be used. 00:33:40.041 Running I/O for 60 seconds... 
00:33:40.041 [2024-06-10 11:57:11.888030] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:40.041 [2024-06-10 11:57:11.900782] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.041 11:57:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.336 11:57:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:40.336 "name": "raid_bdev1", 00:33:40.336 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:40.336 "strip_size_kb": 0, 00:33:40.336 "state": "online", 00:33:40.336 "raid_level": "raid1", 00:33:40.336 "superblock": true, 00:33:40.336 "num_base_bdevs": 4, 00:33:40.336 "num_base_bdevs_discovered": 3, 00:33:40.336 "num_base_bdevs_operational": 3, 00:33:40.336 "base_bdevs_list": [ 00:33:40.336 { 00:33:40.336 "name": null, 00:33:40.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.336 "is_configured": false, 00:33:40.336 "data_offset": 2048, 00:33:40.336 "data_size": 63488 00:33:40.336 }, 00:33:40.336 { 00:33:40.336 "name": "BaseBdev2", 00:33:40.336 "uuid": "f78d0575-e83a-5ea5-80ee-d4297005aea2", 00:33:40.336 "is_configured": true, 00:33:40.336 "data_offset": 2048, 00:33:40.336 "data_size": 63488 00:33:40.336 }, 00:33:40.336 { 00:33:40.336 "name": "BaseBdev3", 00:33:40.336 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:40.336 "is_configured": true, 00:33:40.336 "data_offset": 2048, 00:33:40.336 "data_size": 63488 00:33:40.336 }, 00:33:40.336 { 00:33:40.336 "name": "BaseBdev4", 00:33:40.336 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:40.336 "is_configured": true, 00:33:40.336 "data_offset": 2048, 00:33:40.336 "data_size": 63488 00:33:40.336 } 00:33:40.336 ] 00:33:40.336 }' 00:33:40.336 11:57:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:40.336 11:57:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.902 11:57:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:41.160 [2024-06-10 
11:57:13.060001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:41.160 11:57:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:41.160 [2024-06-10 11:57:13.129118] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:41.160 [2024-06-10 11:57:13.131612] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:41.419 [2024-06-10 11:57:13.257219] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:41.419 [2024-06-10 11:57:13.257890] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:41.419 [2024-06-10 11:57:13.387422] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:41.419 [2024-06-10 11:57:13.387902] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:41.678 [2024-06-10 11:57:13.612942] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:41.936 [2024-06-10 11:57:13.842852] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:42.195 [2024-06-10 11:57:14.079631] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:42.195 [2024-06-10 11:57:14.081177] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:42.195 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:42.195 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:42.195 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:42.195 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:42.195 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:42.195 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.195 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.454 [2024-06-10 11:57:14.303558] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:42.454 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:42.454 "name": "raid_bdev1", 00:33:42.454 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:42.454 "strip_size_kb": 0, 00:33:42.454 "state": "online", 00:33:42.454 "raid_level": "raid1", 00:33:42.454 "superblock": true, 00:33:42.454 "num_base_bdevs": 4, 00:33:42.454 "num_base_bdevs_discovered": 4, 00:33:42.454 "num_base_bdevs_operational": 4, 00:33:42.454 "process": { 00:33:42.454 "type": "rebuild", 00:33:42.454 "target": "spare", 00:33:42.454 "progress": { 00:33:42.454 "blocks": 16384, 00:33:42.454 "percent": 25 00:33:42.454 } 00:33:42.454 }, 00:33:42.454 "base_bdevs_list": [ 00:33:42.454 { 00:33:42.454 "name": "spare", 00:33:42.454 
"uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:42.454 "is_configured": true, 00:33:42.454 "data_offset": 2048, 00:33:42.454 "data_size": 63488 00:33:42.454 }, 00:33:42.454 { 00:33:42.454 "name": "BaseBdev2", 00:33:42.454 "uuid": "f78d0575-e83a-5ea5-80ee-d4297005aea2", 00:33:42.454 "is_configured": true, 00:33:42.454 "data_offset": 2048, 00:33:42.454 "data_size": 63488 00:33:42.454 }, 00:33:42.454 { 00:33:42.454 "name": "BaseBdev3", 00:33:42.454 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:42.454 "is_configured": true, 00:33:42.454 "data_offset": 2048, 00:33:42.454 "data_size": 63488 00:33:42.454 }, 00:33:42.454 { 00:33:42.454 "name": "BaseBdev4", 00:33:42.454 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:42.454 "is_configured": true, 00:33:42.454 "data_offset": 2048, 00:33:42.454 "data_size": 63488 00:33:42.454 } 00:33:42.454 ] 00:33:42.454 }' 00:33:42.454 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:42.454 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:42.454 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:42.454 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:42.454 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:42.712 [2024-06-10 11:57:14.560213] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:42.712 [2024-06-10 11:57:14.764382] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:42.712 [2024-06-10 11:57:14.771176] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:42.713 [2024-06-10 11:57:14.772018] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:42.971 [2024-06-10 11:57:14.887687] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:42.971 [2024-06-10 11:57:14.892784] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:42.971 [2024-06-10 11:57:14.892995] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:42.971 [2024-06-10 11:57:14.893035] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:42.971 [2024-06-10 11:57:14.932381] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.971 11:57:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.229 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:43.229 "name": "raid_bdev1", 00:33:43.229 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:43.229 "strip_size_kb": 0, 00:33:43.229 "state": "online", 00:33:43.229 "raid_level": "raid1", 00:33:43.229 "superblock": true, 00:33:43.229 "num_base_bdevs": 4, 00:33:43.229 "num_base_bdevs_discovered": 3, 00:33:43.229 "num_base_bdevs_operational": 3, 00:33:43.229 "base_bdevs_list": [ 00:33:43.229 { 00:33:43.229 "name": null, 00:33:43.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.229 "is_configured": false, 00:33:43.229 "data_offset": 2048, 00:33:43.229 "data_size": 63488 00:33:43.229 }, 00:33:43.229 { 00:33:43.229 "name": "BaseBdev2", 00:33:43.229 "uuid": "f78d0575-e83a-5ea5-80ee-d4297005aea2", 00:33:43.229 "is_configured": true, 00:33:43.229 "data_offset": 2048, 00:33:43.229 "data_size": 63488 00:33:43.229 }, 00:33:43.229 { 00:33:43.229 "name": "BaseBdev3", 00:33:43.229 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:43.229 "is_configured": true, 00:33:43.229 "data_offset": 2048, 00:33:43.229 "data_size": 63488 00:33:43.229 }, 00:33:43.229 { 00:33:43.229 "name": "BaseBdev4", 00:33:43.229 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:43.229 "is_configured": true, 00:33:43.229 "data_offset": 2048, 00:33:43.229 "data_size": 63488 00:33:43.229 } 00:33:43.229 ] 00:33:43.229 }' 00:33:43.229 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:43.229 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.162 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:44.162 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:44.162 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:44.162 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:44.162 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:44.162 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.162 11:57:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.162 11:57:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:44.162 "name": "raid_bdev1", 00:33:44.162 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:44.162 "strip_size_kb": 0, 00:33:44.162 "state": "online", 00:33:44.162 "raid_level": "raid1", 00:33:44.162 "superblock": true, 00:33:44.162 
"num_base_bdevs": 4, 00:33:44.162 "num_base_bdevs_discovered": 3, 00:33:44.162 "num_base_bdevs_operational": 3, 00:33:44.162 "base_bdevs_list": [ 00:33:44.162 { 00:33:44.162 "name": null, 00:33:44.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.162 "is_configured": false, 00:33:44.162 "data_offset": 2048, 00:33:44.162 "data_size": 63488 00:33:44.162 }, 00:33:44.162 { 00:33:44.162 "name": "BaseBdev2", 00:33:44.162 "uuid": "f78d0575-e83a-5ea5-80ee-d4297005aea2", 00:33:44.162 "is_configured": true, 00:33:44.162 "data_offset": 2048, 00:33:44.162 "data_size": 63488 00:33:44.162 }, 00:33:44.162 { 00:33:44.162 "name": "BaseBdev3", 00:33:44.162 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:44.162 "is_configured": true, 00:33:44.162 "data_offset": 2048, 00:33:44.162 "data_size": 63488 00:33:44.162 }, 00:33:44.162 { 00:33:44.162 "name": "BaseBdev4", 00:33:44.162 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:44.162 "is_configured": true, 00:33:44.162 "data_offset": 2048, 00:33:44.162 "data_size": 63488 00:33:44.162 } 00:33:44.162 ] 00:33:44.162 }' 00:33:44.162 11:57:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:44.162 11:57:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:44.162 11:57:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:44.471 11:57:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:44.471 11:57:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:44.471 [2024-06-10 11:57:16.502161] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:44.729 [2024-06-10 11:57:16.570939] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:44.729 [2024-06-10 11:57:16.573559] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:44.729 11:57:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:44.729 [2024-06-10 11:57:16.692555] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:44.729 [2024-06-10 11:57:16.693292] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:44.986 [2024-06-10 11:57:16.897203] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:44.986 [2024-06-10 11:57:16.897725] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:45.243 [2024-06-10 11:57:17.163655] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:45.807 [2024-06-10 11:57:17.571302] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:45.807 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:45.807 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:45.807 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:45.807 
11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:45.807 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:45.807 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.807 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.807 [2024-06-10 11:57:17.698468] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:45.807 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:45.807 "name": "raid_bdev1", 00:33:45.807 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:45.807 "strip_size_kb": 0, 00:33:45.807 "state": "online", 00:33:45.807 "raid_level": "raid1", 00:33:45.807 "superblock": true, 00:33:45.807 "num_base_bdevs": 4, 00:33:45.807 "num_base_bdevs_discovered": 4, 00:33:45.807 "num_base_bdevs_operational": 4, 00:33:45.807 "process": { 00:33:45.807 "type": "rebuild", 00:33:45.807 "target": "spare", 00:33:45.807 "progress": { 00:33:45.807 "blocks": 18432, 00:33:45.807 "percent": 29 00:33:45.807 } 00:33:45.807 }, 00:33:45.807 "base_bdevs_list": [ 00:33:45.807 { 00:33:45.808 "name": "spare", 00:33:45.808 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:45.808 "is_configured": true, 00:33:45.808 "data_offset": 2048, 00:33:45.808 "data_size": 63488 00:33:45.808 }, 00:33:45.808 { 00:33:45.808 "name": "BaseBdev2", 00:33:45.808 "uuid": "f78d0575-e83a-5ea5-80ee-d4297005aea2", 00:33:45.808 "is_configured": true, 00:33:45.808 "data_offset": 2048, 00:33:45.808 "data_size": 63488 00:33:45.808 }, 00:33:45.808 { 00:33:45.808 "name": "BaseBdev3", 00:33:45.808 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:45.808 "is_configured": true, 00:33:45.808 "data_offset": 2048, 00:33:45.808 "data_size": 63488 00:33:45.808 }, 00:33:45.808 { 00:33:45.808 "name": "BaseBdev4", 00:33:45.808 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:45.808 "is_configured": true, 00:33:45.808 "data_offset": 2048, 00:33:45.808 "data_size": 63488 00:33:45.808 } 00:33:45.808 ] 00:33:45.808 }' 00:33:45.808 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:33:46.064 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:33:46.064 11:57:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:46.322 [2024-06-10 11:57:18.174445] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:46.581 [2024-06-10 11:57:18.468800] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:33:46.581 [2024-06-10 11:57:18.469029] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.581 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.581 [2024-06-10 11:57:18.613083] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:46.839 "name": "raid_bdev1", 00:33:46.839 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:46.839 "strip_size_kb": 0, 00:33:46.839 "state": "online", 00:33:46.839 "raid_level": "raid1", 00:33:46.839 "superblock": true, 00:33:46.839 "num_base_bdevs": 4, 00:33:46.839 "num_base_bdevs_discovered": 3, 00:33:46.839 "num_base_bdevs_operational": 3, 00:33:46.839 "process": { 00:33:46.839 "type": "rebuild", 00:33:46.839 "target": "spare", 00:33:46.839 "progress": { 00:33:46.839 "blocks": 28672, 00:33:46.839 "percent": 45 00:33:46.839 } 00:33:46.839 }, 00:33:46.839 "base_bdevs_list": [ 00:33:46.839 { 00:33:46.839 "name": "spare", 00:33:46.839 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:46.839 "is_configured": true, 00:33:46.839 "data_offset": 2048, 00:33:46.839 "data_size": 63488 00:33:46.839 }, 00:33:46.839 { 00:33:46.839 "name": null, 00:33:46.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.839 "is_configured": false, 00:33:46.839 "data_offset": 2048, 00:33:46.839 "data_size": 63488 00:33:46.839 }, 00:33:46.839 { 00:33:46.839 "name": "BaseBdev3", 00:33:46.839 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:46.839 "is_configured": true, 00:33:46.839 "data_offset": 2048, 00:33:46.839 "data_size": 63488 00:33:46.839 }, 00:33:46.839 { 00:33:46.839 "name": "BaseBdev4", 00:33:46.839 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:46.839 "is_configured": true, 00:33:46.839 "data_offset": 2048, 00:33:46.839 "data_size": 63488 00:33:46.839 } 00:33:46.839 ] 00:33:46.839 }' 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # 
[[ rebuild == \r\e\b\u\i\l\d ]] 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=1098 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.839 11:57:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.096 11:57:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:47.096 "name": "raid_bdev1", 00:33:47.096 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:47.096 "strip_size_kb": 0, 00:33:47.096 "state": "online", 00:33:47.096 "raid_level": "raid1", 00:33:47.096 "superblock": true, 00:33:47.096 "num_base_bdevs": 4, 00:33:47.096 "num_base_bdevs_discovered": 3, 00:33:47.097 "num_base_bdevs_operational": 3, 00:33:47.097 "process": { 00:33:47.097 "type": "rebuild", 00:33:47.097 "target": "spare", 00:33:47.097 "progress": { 00:33:47.097 "blocks": 34816, 00:33:47.097 "percent": 54 00:33:47.097 } 00:33:47.097 }, 00:33:47.097 "base_bdevs_list": [ 00:33:47.097 { 00:33:47.097 "name": "spare", 00:33:47.097 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:47.097 "is_configured": true, 00:33:47.097 "data_offset": 2048, 00:33:47.097 "data_size": 63488 00:33:47.097 }, 00:33:47.097 { 00:33:47.097 "name": null, 00:33:47.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.097 "is_configured": false, 00:33:47.097 "data_offset": 2048, 00:33:47.097 "data_size": 63488 00:33:47.097 }, 00:33:47.097 { 00:33:47.097 "name": "BaseBdev3", 00:33:47.097 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:47.097 "is_configured": true, 00:33:47.097 "data_offset": 2048, 00:33:47.097 "data_size": 63488 00:33:47.097 }, 00:33:47.097 { 00:33:47.097 "name": "BaseBdev4", 00:33:47.097 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:47.097 "is_configured": true, 00:33:47.097 "data_offset": 2048, 00:33:47.097 "data_size": 63488 00:33:47.097 } 00:33:47.097 ] 00:33:47.097 }' 00:33:47.354 11:57:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:47.354 11:57:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:47.354 11:57:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:47.354 11:57:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:47.354 11:57:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # 
sleep 1 00:33:47.354 [2024-06-10 11:57:19.232421] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:33:47.612 [2024-06-10 11:57:19.441367] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:33:48.178 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:48.178 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:48.178 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:48.178 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:48.178 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:48.178 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:48.437 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.437 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.437 [2024-06-10 11:57:20.456298] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:33:48.702 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:48.702 "name": "raid_bdev1", 00:33:48.702 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:48.702 "strip_size_kb": 0, 00:33:48.702 "state": "online", 00:33:48.702 "raid_level": "raid1", 00:33:48.702 "superblock": true, 00:33:48.702 "num_base_bdevs": 4, 00:33:48.702 "num_base_bdevs_discovered": 3, 00:33:48.702 "num_base_bdevs_operational": 3, 00:33:48.702 "process": { 00:33:48.702 "type": "rebuild", 00:33:48.702 "target": "spare", 00:33:48.702 "progress": { 00:33:48.702 "blocks": 57344, 00:33:48.702 "percent": 90 00:33:48.702 } 00:33:48.702 }, 00:33:48.702 "base_bdevs_list": [ 00:33:48.702 { 00:33:48.702 "name": "spare", 00:33:48.702 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:48.702 "is_configured": true, 00:33:48.702 "data_offset": 2048, 00:33:48.702 "data_size": 63488 00:33:48.702 }, 00:33:48.702 { 00:33:48.702 "name": null, 00:33:48.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.702 "is_configured": false, 00:33:48.702 "data_offset": 2048, 00:33:48.702 "data_size": 63488 00:33:48.702 }, 00:33:48.702 { 00:33:48.702 "name": "BaseBdev3", 00:33:48.702 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:48.702 "is_configured": true, 00:33:48.702 "data_offset": 2048, 00:33:48.702 "data_size": 63488 00:33:48.702 }, 00:33:48.702 { 00:33:48.702 "name": "BaseBdev4", 00:33:48.702 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:48.702 "is_configured": true, 00:33:48.702 "data_offset": 2048, 00:33:48.702 "data_size": 63488 00:33:48.702 } 00:33:48.702 ] 00:33:48.702 }' 00:33:48.702 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:48.702 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:48.702 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:48.702 [2024-06-10 11:57:20.575926] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:33:48.702 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:48.702 11:57:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:48.969 [2024-06-10 11:57:20.805130] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:48.969 [2024-06-10 11:57:20.911487] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:48.969 [2024-06-10 11:57:20.914541] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:49.902 "name": "raid_bdev1", 00:33:49.902 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:49.902 "strip_size_kb": 0, 00:33:49.902 "state": "online", 00:33:49.902 "raid_level": "raid1", 00:33:49.902 "superblock": true, 00:33:49.902 "num_base_bdevs": 4, 00:33:49.902 "num_base_bdevs_discovered": 3, 00:33:49.902 "num_base_bdevs_operational": 3, 00:33:49.902 "base_bdevs_list": [ 00:33:49.902 { 00:33:49.902 "name": "spare", 00:33:49.902 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:49.902 "is_configured": true, 00:33:49.902 "data_offset": 2048, 00:33:49.902 "data_size": 63488 00:33:49.902 }, 00:33:49.902 { 00:33:49.902 "name": null, 00:33:49.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.902 "is_configured": false, 00:33:49.902 "data_offset": 2048, 00:33:49.902 "data_size": 63488 00:33:49.902 }, 00:33:49.902 { 00:33:49.902 "name": "BaseBdev3", 00:33:49.902 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:49.902 "is_configured": true, 00:33:49.902 "data_offset": 2048, 00:33:49.902 "data_size": 63488 00:33:49.902 }, 00:33:49.902 { 00:33:49.902 "name": "BaseBdev4", 00:33:49.902 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:49.902 "is_configured": true, 00:33:49.902 "data_offset": 2048, 00:33:49.902 "data_size": 63488 00:33:49.902 } 00:33:49.902 ] 00:33:49.902 }' 00:33:49.902 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:49.903 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:49.903 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.162 11:57:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.162 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:50.162 "name": "raid_bdev1", 00:33:50.162 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:50.162 "strip_size_kb": 0, 00:33:50.162 "state": "online", 00:33:50.162 "raid_level": "raid1", 00:33:50.163 "superblock": true, 00:33:50.163 "num_base_bdevs": 4, 00:33:50.163 "num_base_bdevs_discovered": 3, 00:33:50.163 "num_base_bdevs_operational": 3, 00:33:50.163 "base_bdevs_list": [ 00:33:50.163 { 00:33:50.163 "name": "spare", 00:33:50.163 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:50.163 "is_configured": true, 00:33:50.163 "data_offset": 2048, 00:33:50.163 "data_size": 63488 00:33:50.163 }, 00:33:50.163 { 00:33:50.163 "name": null, 00:33:50.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.163 "is_configured": false, 00:33:50.163 "data_offset": 2048, 00:33:50.163 "data_size": 63488 00:33:50.163 }, 00:33:50.163 { 00:33:50.163 "name": "BaseBdev3", 00:33:50.163 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:50.163 "is_configured": true, 00:33:50.163 "data_offset": 2048, 00:33:50.163 "data_size": 63488 00:33:50.163 }, 00:33:50.163 { 00:33:50.163 "name": "BaseBdev4", 00:33:50.163 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:50.163 "is_configured": true, 00:33:50.163 "data_offset": 2048, 00:33:50.163 "data_size": 63488 00:33:50.163 } 00:33:50.163 ] 00:33:50.163 }' 00:33:50.163 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.421 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.681 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:50.681 "name": "raid_bdev1", 00:33:50.681 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:50.681 "strip_size_kb": 0, 00:33:50.681 "state": "online", 00:33:50.681 "raid_level": "raid1", 00:33:50.681 "superblock": true, 00:33:50.681 "num_base_bdevs": 4, 00:33:50.681 "num_base_bdevs_discovered": 3, 00:33:50.681 "num_base_bdevs_operational": 3, 00:33:50.681 "base_bdevs_list": [ 00:33:50.681 { 00:33:50.681 "name": "spare", 00:33:50.681 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:50.681 "is_configured": true, 00:33:50.681 "data_offset": 2048, 00:33:50.681 "data_size": 63488 00:33:50.681 }, 00:33:50.681 { 00:33:50.681 "name": null, 00:33:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.681 "is_configured": false, 00:33:50.681 "data_offset": 2048, 00:33:50.681 "data_size": 63488 00:33:50.681 }, 00:33:50.681 { 00:33:50.681 "name": "BaseBdev3", 00:33:50.681 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:50.681 "is_configured": true, 00:33:50.681 "data_offset": 2048, 00:33:50.681 "data_size": 63488 00:33:50.681 }, 00:33:50.681 { 00:33:50.681 "name": "BaseBdev4", 00:33:50.681 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:50.681 "is_configured": true, 00:33:50.681 "data_offset": 2048, 00:33:50.681 "data_size": 63488 00:33:50.681 } 00:33:50.681 ] 00:33:50.681 }' 00:33:50.681 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:50.681 11:57:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:51.246 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:51.502 [2024-06-10 11:57:23.512984] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:51.502 [2024-06-10 11:57:23.513246] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:51.502 00:33:51.502 Latency(us) 00:33:51.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.503 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:33:51.503 raid_bdev1 : 11.69 93.24 279.73 0.00 0.00 15414.45 349.14 118339.29 00:33:51.503 =================================================================================================================== 00:33:51.503 Total : 93.24 279.73 0.00 0.00 15414.45 349.14 118339.29 00:33:51.761 [2024-06-10 11:57:23.570437] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:51.761 [2024-06-10 11:57:23.570669] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:51.761 [2024-06-10 11:57:23.570826] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:51.761 0 00:33:51.761 [2024-06-10 11:57:23.571007] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:33:51.761 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.761 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.018 11:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:33:52.276 /dev/nbd0 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:52.276 1+0 records in 00:33:52.276 1+0 records out 00:33:52.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417272 s, 9.8 MB/s 
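For context: the NBD round running above and below reduces to the hand-runnable sequence sketched here. It only mirrors the calls bdev_raid.sh@724-731 and nbd_common.sh are shown making in the trace; the socket path and device nodes are taken from the log, and the 1048576-byte (1 MiB) skip matches the data_offset of 2048 blocks x 512 B reported for every member above (presumably the superblock/metadata area written because the array was created with -s).

# Export the rebuilt member ("spare") and a surviving member over NBD, then byte-compare their data areas.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1
cmp -i 1048576 /dev/nbd0 /dev/nbd1   # no output means the rebuilt data matches past the skipped region
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
# The trace below repeats the same export/cmp/stop round for BaseBdev4 (the empty BaseBdev2 slot is skipped) and finally stops /dev/nbd0.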
00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.276 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.277 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:33:52.535 /dev/nbd1 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:52.535 11:57:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:52.535 1+0 records in 00:33:52.535 1+0 records out 00:33:52.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457517 s, 9.0 MB/s 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.535 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:52.793 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:52.793 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:52.793 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:52.793 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:52.793 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:52.793 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:52.793 11:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:53.051 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:53.309 11:57:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:53.309 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:33:53.568 /dev/nbd1 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:53.568 1+0 records in 00:33:53.568 1+0 records out 00:33:53.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360023 s, 11.4 MB/s 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:53.568 11:57:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:53.568 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:53.826 11:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:33:54.085 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:54.343 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
spare_delay -p spare 00:33:54.601 [2024-06-10 11:57:26.652044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:54.601 [2024-06-10 11:57:26.652145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:54.601 [2024-06-10 11:57:26.652194] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:54.601 [2024-06-10 11:57:26.652227] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:54.601 [2024-06-10 11:57:26.654855] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:54.601 [2024-06-10 11:57:26.654919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:54.601 [2024-06-10 11:57:26.655055] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:54.601 [2024-06-10 11:57:26.655123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:54.601 [2024-06-10 11:57:26.655262] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:54.601 [2024-06-10 11:57:26.655386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:54.601 spare 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.864 [2024-06-10 11:57:26.755469] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:33:54.864 [2024-06-10 11:57:26.755515] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:54.864 [2024-06-10 11:57:26.755686] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000373d0 00:33:54.864 [2024-06-10 11:57:26.756087] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:33:54.864 [2024-06-10 11:57:26.756101] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:33:54.864 [2024-06-10 11:57:26.756281] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:54.864 "name": "raid_bdev1", 00:33:54.864 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:54.864 "strip_size_kb": 0, 00:33:54.864 "state": "online", 00:33:54.864 "raid_level": "raid1", 00:33:54.864 "superblock": true, 00:33:54.864 "num_base_bdevs": 4, 00:33:54.864 "num_base_bdevs_discovered": 3, 00:33:54.864 "num_base_bdevs_operational": 3, 00:33:54.864 "base_bdevs_list": [ 00:33:54.864 { 00:33:54.864 "name": "spare", 00:33:54.864 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:54.864 "is_configured": true, 00:33:54.864 "data_offset": 2048, 00:33:54.864 "data_size": 63488 00:33:54.864 }, 00:33:54.864 { 00:33:54.864 "name": null, 00:33:54.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.864 "is_configured": false, 00:33:54.864 "data_offset": 2048, 00:33:54.864 "data_size": 63488 00:33:54.864 }, 00:33:54.864 { 00:33:54.864 "name": "BaseBdev3", 00:33:54.864 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:54.864 "is_configured": true, 00:33:54.864 "data_offset": 2048, 00:33:54.864 "data_size": 63488 00:33:54.864 }, 00:33:54.864 { 00:33:54.864 "name": "BaseBdev4", 00:33:54.864 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:54.864 "is_configured": true, 00:33:54.864 "data_offset": 2048, 00:33:54.864 "data_size": 63488 00:33:54.864 } 00:33:54.864 ] 00:33:54.864 }' 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:54.864 11:57:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:55.432 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:55.432 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:55.432 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:55.432 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:55.432 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:55.432 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.432 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.000 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:56.000 "name": "raid_bdev1", 00:33:56.000 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:56.000 "strip_size_kb": 0, 00:33:56.000 "state": "online", 00:33:56.000 "raid_level": "raid1", 00:33:56.000 "superblock": true, 00:33:56.000 "num_base_bdevs": 4, 00:33:56.000 "num_base_bdevs_discovered": 3, 00:33:56.000 "num_base_bdevs_operational": 3, 00:33:56.000 "base_bdevs_list": [ 00:33:56.000 { 00:33:56.000 "name": "spare", 00:33:56.000 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:56.000 "is_configured": true, 00:33:56.000 "data_offset": 2048, 00:33:56.000 "data_size": 63488 00:33:56.000 }, 00:33:56.000 { 00:33:56.000 "name": null, 00:33:56.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.000 "is_configured": false, 00:33:56.000 "data_offset": 2048, 00:33:56.000 "data_size": 63488 00:33:56.000 }, 00:33:56.000 { 00:33:56.000 "name": "BaseBdev3", 00:33:56.000 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:56.000 "is_configured": true, 00:33:56.000 
"data_offset": 2048, 00:33:56.000 "data_size": 63488 00:33:56.000 }, 00:33:56.000 { 00:33:56.000 "name": "BaseBdev4", 00:33:56.000 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:56.000 "is_configured": true, 00:33:56.000 "data_offset": 2048, 00:33:56.000 "data_size": 63488 00:33:56.000 } 00:33:56.000 ] 00:33:56.000 }' 00:33:56.000 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:56.000 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:56.000 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:56.000 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:56.000 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:56.000 11:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.258 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:33:56.258 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:56.258 [2024-06-10 11:57:28.312883] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.516 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.775 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:56.775 "name": "raid_bdev1", 00:33:56.775 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:56.775 "strip_size_kb": 0, 00:33:56.775 "state": "online", 00:33:56.775 "raid_level": "raid1", 00:33:56.775 "superblock": true, 00:33:56.775 "num_base_bdevs": 4, 00:33:56.775 "num_base_bdevs_discovered": 2, 00:33:56.775 "num_base_bdevs_operational": 2, 00:33:56.775 "base_bdevs_list": [ 00:33:56.775 { 00:33:56.775 "name": null, 00:33:56.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.775 
"is_configured": false, 00:33:56.775 "data_offset": 2048, 00:33:56.775 "data_size": 63488 00:33:56.775 }, 00:33:56.775 { 00:33:56.775 "name": null, 00:33:56.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.775 "is_configured": false, 00:33:56.775 "data_offset": 2048, 00:33:56.775 "data_size": 63488 00:33:56.775 }, 00:33:56.775 { 00:33:56.775 "name": "BaseBdev3", 00:33:56.775 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:56.775 "is_configured": true, 00:33:56.775 "data_offset": 2048, 00:33:56.775 "data_size": 63488 00:33:56.775 }, 00:33:56.775 { 00:33:56.775 "name": "BaseBdev4", 00:33:56.775 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:56.775 "is_configured": true, 00:33:56.775 "data_offset": 2048, 00:33:56.775 "data_size": 63488 00:33:56.775 } 00:33:56.775 ] 00:33:56.775 }' 00:33:56.775 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:56.775 11:57:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:57.341 11:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:57.598 [2024-06-10 11:57:29.565358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:57.598 [2024-06-10 11:57:29.565554] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:33:57.598 [2024-06-10 11:57:29.565569] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:57.598 [2024-06-10 11:57:29.565626] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:57.598 [2024-06-10 11:57:29.583412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:33:57.598 [2024-06-10 11:57:29.585638] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:57.598 11:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:33:58.972 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:58.973 "name": "raid_bdev1", 00:33:58.973 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:58.973 "strip_size_kb": 0, 00:33:58.973 "state": "online", 00:33:58.973 "raid_level": "raid1", 00:33:58.973 "superblock": true, 00:33:58.973 "num_base_bdevs": 4, 00:33:58.973 "num_base_bdevs_discovered": 3, 00:33:58.973 "num_base_bdevs_operational": 3, 00:33:58.973 "process": { 00:33:58.973 "type": "rebuild", 00:33:58.973 
"target": "spare", 00:33:58.973 "progress": { 00:33:58.973 "blocks": 24576, 00:33:58.973 "percent": 38 00:33:58.973 } 00:33:58.973 }, 00:33:58.973 "base_bdevs_list": [ 00:33:58.973 { 00:33:58.973 "name": "spare", 00:33:58.973 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:33:58.973 "is_configured": true, 00:33:58.973 "data_offset": 2048, 00:33:58.973 "data_size": 63488 00:33:58.973 }, 00:33:58.973 { 00:33:58.973 "name": null, 00:33:58.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:58.973 "is_configured": false, 00:33:58.973 "data_offset": 2048, 00:33:58.973 "data_size": 63488 00:33:58.973 }, 00:33:58.973 { 00:33:58.973 "name": "BaseBdev3", 00:33:58.973 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:58.973 "is_configured": true, 00:33:58.973 "data_offset": 2048, 00:33:58.973 "data_size": 63488 00:33:58.973 }, 00:33:58.973 { 00:33:58.973 "name": "BaseBdev4", 00:33:58.973 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:58.973 "is_configured": true, 00:33:58.973 "data_offset": 2048, 00:33:58.973 "data_size": 63488 00:33:58.973 } 00:33:58.973 ] 00:33:58.973 }' 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:58.973 11:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:59.231 [2024-06-10 11:57:31.236457] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:59.490 [2024-06-10 11:57:31.296360] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:59.490 [2024-06-10 11:57:31.296432] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:59.490 [2024-06-10 11:57:31.296448] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:59.490 [2024-06-10 11:57:31.296472] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:59.490 11:57:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.490 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.748 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:59.748 "name": "raid_bdev1", 00:33:59.748 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:33:59.748 "strip_size_kb": 0, 00:33:59.748 "state": "online", 00:33:59.748 "raid_level": "raid1", 00:33:59.748 "superblock": true, 00:33:59.748 "num_base_bdevs": 4, 00:33:59.748 "num_base_bdevs_discovered": 2, 00:33:59.748 "num_base_bdevs_operational": 2, 00:33:59.748 "base_bdevs_list": [ 00:33:59.748 { 00:33:59.748 "name": null, 00:33:59.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:59.748 "is_configured": false, 00:33:59.748 "data_offset": 2048, 00:33:59.748 "data_size": 63488 00:33:59.748 }, 00:33:59.748 { 00:33:59.748 "name": null, 00:33:59.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:59.748 "is_configured": false, 00:33:59.748 "data_offset": 2048, 00:33:59.748 "data_size": 63488 00:33:59.748 }, 00:33:59.748 { 00:33:59.748 "name": "BaseBdev3", 00:33:59.748 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:33:59.748 "is_configured": true, 00:33:59.748 "data_offset": 2048, 00:33:59.748 "data_size": 63488 00:33:59.748 }, 00:33:59.748 { 00:33:59.748 "name": "BaseBdev4", 00:33:59.748 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:33:59.748 "is_configured": true, 00:33:59.748 "data_offset": 2048, 00:33:59.748 "data_size": 63488 00:33:59.748 } 00:33:59.748 ] 00:33:59.748 }' 00:33:59.748 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:59.748 11:57:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:00.319 11:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:00.577 [2024-06-10 11:57:32.589725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:00.577 [2024-06-10 11:57:32.589813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:00.577 [2024-06-10 11:57:32.589856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:34:00.577 [2024-06-10 11:57:32.589879] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:00.577 [2024-06-10 11:57:32.590441] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:00.577 [2024-06-10 11:57:32.590485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:00.577 [2024-06-10 11:57:32.590619] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:00.577 [2024-06-10 11:57:32.590634] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:34:00.577 [2024-06-10 11:57:32.590643] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
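The "spare" device recreated above is a passthru vbdev layered on the delay bdev spare_delay; as the NOTICE lines from bdev_raid.c:3620 show, the raid module's examine path finds the RAID superblock on it and re-adds it to raid_bdev1 on its own. The same sequence can be reproduced by hand against the test's private RPC socket; a minimal sketch, using the socket path and bdev names from this run:

    # Recreate the passthru device; the examine callback re-adds it to raid_bdev1 automatically.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b spare_delay -p spare
    # Confirm it came back as the first base bdev of raid_bdev1.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .base_bdevs_list[0].name'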
00:34:00.577 [2024-06-10 11:57:32.590696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:00.577 [2024-06-10 11:57:32.608064] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000378b0 00:34:00.577 spare 00:34:00.577 [2024-06-10 11:57:32.610314] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:00.577 11:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:01.953 "name": "raid_bdev1", 00:34:01.953 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:34:01.953 "strip_size_kb": 0, 00:34:01.953 "state": "online", 00:34:01.953 "raid_level": "raid1", 00:34:01.953 "superblock": true, 00:34:01.953 "num_base_bdevs": 4, 00:34:01.953 "num_base_bdevs_discovered": 3, 00:34:01.953 "num_base_bdevs_operational": 3, 00:34:01.953 "process": { 00:34:01.953 "type": "rebuild", 00:34:01.953 "target": "spare", 00:34:01.953 "progress": { 00:34:01.953 "blocks": 24576, 00:34:01.953 "percent": 38 00:34:01.953 } 00:34:01.953 }, 00:34:01.953 "base_bdevs_list": [ 00:34:01.953 { 00:34:01.953 "name": "spare", 00:34:01.953 "uuid": "d84fe9de-f036-522d-aaa6-cfefbe753583", 00:34:01.953 "is_configured": true, 00:34:01.953 "data_offset": 2048, 00:34:01.953 "data_size": 63488 00:34:01.953 }, 00:34:01.953 { 00:34:01.953 "name": null, 00:34:01.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.953 "is_configured": false, 00:34:01.953 "data_offset": 2048, 00:34:01.953 "data_size": 63488 00:34:01.953 }, 00:34:01.953 { 00:34:01.953 "name": "BaseBdev3", 00:34:01.953 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:34:01.953 "is_configured": true, 00:34:01.953 "data_offset": 2048, 00:34:01.953 "data_size": 63488 00:34:01.953 }, 00:34:01.953 { 00:34:01.953 "name": "BaseBdev4", 00:34:01.953 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:34:01.953 "is_configured": true, 00:34:01.953 "data_offset": 2048, 00:34:01.953 "data_size": 63488 00:34:01.953 } 00:34:01.953 ] 00:34:01.953 }' 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:01.953 11:57:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:02.212 [2024-06-10 11:57:34.249228] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:02.470 [2024-06-10 11:57:34.321138] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:02.470 [2024-06-10 11:57:34.321220] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:02.470 [2024-06-10 11:57:34.321238] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:02.470 [2024-06-10 11:57:34.321246] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.470 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.796 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:02.796 "name": "raid_bdev1", 00:34:02.796 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:34:02.796 "strip_size_kb": 0, 00:34:02.796 "state": "online", 00:34:02.796 "raid_level": "raid1", 00:34:02.796 "superblock": true, 00:34:02.796 "num_base_bdevs": 4, 00:34:02.796 "num_base_bdevs_discovered": 2, 00:34:02.796 "num_base_bdevs_operational": 2, 00:34:02.796 "base_bdevs_list": [ 00:34:02.796 { 00:34:02.796 "name": null, 00:34:02.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.796 "is_configured": false, 00:34:02.796 "data_offset": 2048, 00:34:02.796 "data_size": 63488 00:34:02.796 }, 00:34:02.796 { 00:34:02.796 "name": null, 00:34:02.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.796 "is_configured": false, 00:34:02.796 "data_offset": 2048, 00:34:02.796 "data_size": 63488 00:34:02.796 }, 00:34:02.796 { 00:34:02.796 "name": "BaseBdev3", 00:34:02.796 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:34:02.796 "is_configured": true, 00:34:02.796 "data_offset": 2048, 00:34:02.796 "data_size": 63488 00:34:02.796 }, 00:34:02.796 { 00:34:02.796 "name": "BaseBdev4", 00:34:02.796 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:34:02.796 "is_configured": true, 00:34:02.796 "data_offset": 2048, 00:34:02.796 
"data_size": 63488 00:34:02.796 } 00:34:02.796 ] 00:34:02.796 }' 00:34:02.796 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:02.796 11:57:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:03.375 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:03.375 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:03.375 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:03.375 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:03.375 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:03.375 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.375 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.634 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:03.634 "name": "raid_bdev1", 00:34:03.634 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:34:03.634 "strip_size_kb": 0, 00:34:03.634 "state": "online", 00:34:03.634 "raid_level": "raid1", 00:34:03.634 "superblock": true, 00:34:03.634 "num_base_bdevs": 4, 00:34:03.634 "num_base_bdevs_discovered": 2, 00:34:03.634 "num_base_bdevs_operational": 2, 00:34:03.634 "base_bdevs_list": [ 00:34:03.634 { 00:34:03.634 "name": null, 00:34:03.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.634 "is_configured": false, 00:34:03.634 "data_offset": 2048, 00:34:03.634 "data_size": 63488 00:34:03.634 }, 00:34:03.634 { 00:34:03.634 "name": null, 00:34:03.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.634 "is_configured": false, 00:34:03.634 "data_offset": 2048, 00:34:03.634 "data_size": 63488 00:34:03.634 }, 00:34:03.634 { 00:34:03.634 "name": "BaseBdev3", 00:34:03.634 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:34:03.634 "is_configured": true, 00:34:03.634 "data_offset": 2048, 00:34:03.634 "data_size": 63488 00:34:03.634 }, 00:34:03.634 { 00:34:03.634 "name": "BaseBdev4", 00:34:03.634 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:34:03.634 "is_configured": true, 00:34:03.634 "data_offset": 2048, 00:34:03.634 "data_size": 63488 00:34:03.634 } 00:34:03.634 ] 00:34:03.634 }' 00:34:03.634 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:03.634 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:03.634 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:03.634 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:03.634 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:03.893 11:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:04.151 [2024-06-10 11:57:36.148085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:34:04.151 [2024-06-10 11:57:36.148183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.151 [2024-06-10 11:57:36.148229] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:34:04.151 [2024-06-10 11:57:36.148251] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.151 [2024-06-10 11:57:36.148726] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.151 [2024-06-10 11:57:36.148778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:04.151 [2024-06-10 11:57:36.148935] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:04.151 [2024-06-10 11:57:36.148966] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:34:04.151 [2024-06-10 11:57:36.148975] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:04.151 BaseBdev1 00:34:04.151 11:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:05.526 "name": "raid_bdev1", 00:34:05.526 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:34:05.526 "strip_size_kb": 0, 00:34:05.526 "state": "online", 00:34:05.526 "raid_level": "raid1", 00:34:05.526 "superblock": true, 00:34:05.526 "num_base_bdevs": 4, 00:34:05.526 "num_base_bdevs_discovered": 2, 00:34:05.526 "num_base_bdevs_operational": 2, 00:34:05.526 "base_bdevs_list": [ 00:34:05.526 { 00:34:05.526 "name": null, 00:34:05.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.526 "is_configured": false, 00:34:05.526 "data_offset": 2048, 00:34:05.526 "data_size": 63488 00:34:05.526 }, 00:34:05.526 { 00:34:05.526 "name": null, 00:34:05.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.526 "is_configured": false, 00:34:05.526 "data_offset": 2048, 00:34:05.526 "data_size": 63488 
00:34:05.526 }, 00:34:05.526 { 00:34:05.526 "name": "BaseBdev3", 00:34:05.526 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:34:05.526 "is_configured": true, 00:34:05.526 "data_offset": 2048, 00:34:05.526 "data_size": 63488 00:34:05.526 }, 00:34:05.526 { 00:34:05.526 "name": "BaseBdev4", 00:34:05.526 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:34:05.526 "is_configured": true, 00:34:05.526 "data_offset": 2048, 00:34:05.526 "data_size": 63488 00:34:05.526 } 00:34:05.526 ] 00:34:05.526 }' 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:05.526 11:57:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:06.092 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:06.092 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:06.092 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:06.092 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:06.092 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:06.092 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.092 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.352 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:06.352 "name": "raid_bdev1", 00:34:06.352 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:34:06.352 "strip_size_kb": 0, 00:34:06.352 "state": "online", 00:34:06.352 "raid_level": "raid1", 00:34:06.352 "superblock": true, 00:34:06.352 "num_base_bdevs": 4, 00:34:06.352 "num_base_bdevs_discovered": 2, 00:34:06.352 "num_base_bdevs_operational": 2, 00:34:06.352 "base_bdevs_list": [ 00:34:06.352 { 00:34:06.352 "name": null, 00:34:06.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.352 "is_configured": false, 00:34:06.352 "data_offset": 2048, 00:34:06.352 "data_size": 63488 00:34:06.352 }, 00:34:06.352 { 00:34:06.352 "name": null, 00:34:06.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.352 "is_configured": false, 00:34:06.352 "data_offset": 2048, 00:34:06.352 "data_size": 63488 00:34:06.352 }, 00:34:06.352 { 00:34:06.352 "name": "BaseBdev3", 00:34:06.352 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:34:06.352 "is_configured": true, 00:34:06.352 "data_offset": 2048, 00:34:06.352 "data_size": 63488 00:34:06.352 }, 00:34:06.352 { 00:34:06.352 "name": "BaseBdev4", 00:34:06.352 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:34:06.352 "is_configured": true, 00:34:06.352 "data_offset": 2048, 00:34:06.352 "data_size": 63488 00:34:06.352 } 00:34:06.352 ] 00:34:06.352 }' 00:34:06.352 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:06.352 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:06.352 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # local es=0 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:06.610 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.868 [2024-06-10 11:57:38.693053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:06.868 [2024-06-10 11:57:38.693223] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:34:06.868 [2024-06-10 11:57:38.693236] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:06.868 request: 00:34:06.868 { 00:34:06.868 "base_bdev": "BaseBdev1", 00:34:06.868 "raid_bdev": "raid_bdev1", 00:34:06.868 "method": "bdev_raid_add_base_bdev", 00:34:06.868 "req_id": 1 00:34:06.868 } 00:34:06.868 Got JSON-RPC error response 00:34:06.868 response: 00:34:06.868 { 00:34:06.868 "code": -22, 00:34:06.868 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:06.868 } 00:34:06.868 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # es=1 00:34:06.868 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:06.868 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:06.868 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:06.868 11:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.803 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.062 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:08.062 "name": "raid_bdev1", 00:34:08.062 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:34:08.062 "strip_size_kb": 0, 00:34:08.062 "state": "online", 00:34:08.062 "raid_level": "raid1", 00:34:08.062 "superblock": true, 00:34:08.062 "num_base_bdevs": 4, 00:34:08.062 "num_base_bdevs_discovered": 2, 00:34:08.062 "num_base_bdevs_operational": 2, 00:34:08.062 "base_bdevs_list": [ 00:34:08.062 { 00:34:08.062 "name": null, 00:34:08.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.062 "is_configured": false, 00:34:08.062 "data_offset": 2048, 00:34:08.062 "data_size": 63488 00:34:08.062 }, 00:34:08.062 { 00:34:08.062 "name": null, 00:34:08.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.062 "is_configured": false, 00:34:08.062 "data_offset": 2048, 00:34:08.062 "data_size": 63488 00:34:08.062 }, 00:34:08.062 { 00:34:08.062 "name": "BaseBdev3", 00:34:08.062 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:34:08.062 "is_configured": true, 00:34:08.062 "data_offset": 2048, 00:34:08.062 "data_size": 63488 00:34:08.062 }, 00:34:08.062 { 00:34:08.062 "name": "BaseBdev4", 00:34:08.062 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:34:08.062 "is_configured": true, 00:34:08.062 "data_offset": 2048, 00:34:08.062 "data_size": 63488 00:34:08.062 } 00:34:08.062 ] 00:34:08.062 }' 00:34:08.062 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:08.062 11:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.997 11:57:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:08.997 "name": "raid_bdev1", 00:34:08.997 "uuid": "00f29337-2330-4d67-a70b-f2f942058a9c", 00:34:08.997 "strip_size_kb": 0, 00:34:08.997 "state": "online", 00:34:08.997 "raid_level": "raid1", 00:34:08.997 "superblock": true, 00:34:08.997 "num_base_bdevs": 4, 00:34:08.997 "num_base_bdevs_discovered": 2, 00:34:08.997 "num_base_bdevs_operational": 2, 00:34:08.997 "base_bdevs_list": [ 00:34:08.997 { 00:34:08.997 "name": null, 00:34:08.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.997 "is_configured": false, 00:34:08.997 "data_offset": 2048, 00:34:08.997 "data_size": 63488 00:34:08.997 }, 00:34:08.997 { 00:34:08.997 "name": null, 00:34:08.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.997 "is_configured": false, 00:34:08.997 "data_offset": 2048, 00:34:08.997 "data_size": 63488 00:34:08.997 }, 00:34:08.997 { 00:34:08.997 "name": "BaseBdev3", 00:34:08.997 "uuid": "cc6cc02a-1514-5fd8-83fb-02d14245232a", 00:34:08.997 "is_configured": true, 00:34:08.997 "data_offset": 2048, 00:34:08.997 "data_size": 63488 00:34:08.997 }, 00:34:08.997 { 00:34:08.997 "name": "BaseBdev4", 00:34:08.997 "uuid": "e137c1dc-95c4-5d85-9459-7823bd5e2fd8", 00:34:08.997 "is_configured": true, 00:34:08.997 "data_offset": 2048, 00:34:08.997 "data_size": 63488 00:34:08.997 } 00:34:08.997 ] 00:34:08.997 }' 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:08.997 11:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 150858 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@949 -- # '[' -z 150858 ']' 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # kill -0 150858 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # uname 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 150858 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # echo 'killing process with pid 150858' 00:34:08.998 killing process with pid 150858 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # kill 150858 00:34:08.998 Received shutdown signal, test time was about 29.195825 seconds 00:34:08.998 00:34:08.998 Latency(us) 00:34:08.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.998 =================================================================================================================== 00:34:08.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:08.998 [2024-06-10 11:57:41.047114] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:08.998 
[2024-06-10 11:57:41.047248] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:08.998 [2024-06-10 11:57:41.047321] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:08.998 [2024-06-10 11:57:41.047333] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:34:08.998 11:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # wait 150858 00:34:09.563 [2024-06-10 11:57:41.621308] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:11.460 ************************************ 00:34:11.460 END TEST raid_rebuild_test_sb_io 00:34:11.460 ************************************ 00:34:11.460 11:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:34:11.460 00:34:11.460 real 0m37.338s 00:34:11.460 user 0m58.596s 00:34:11.460 sys 0m4.953s 00:34:11.460 11:57:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:11.460 11:57:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.460 11:57:43 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:34:11.460 11:57:43 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:34:11.461 11:57:43 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:34:11.461 11:57:43 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:34:11.461 11:57:43 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:11.461 11:57:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:11.461 ************************************ 00:34:11.461 START TEST raid5f_state_function_test 00:34:11.461 ************************************ 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid5f 3 false 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=151812 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 151812' 00:34:11.461 Process raid pid: 151812 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 151812 /var/tmp/spdk-raid.sock 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 151812 ']' 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:11.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:11.461 11:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.719 [2024-06-10 11:57:43.557599] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
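The raid5f_state_function_test starting here follows the same harness pattern as the previous test: a bare bdev_svc application is launched with its own RPC socket and the bdev_raid debug log flag, the script waits for the socket with waitforlisten, and every subsequent step is an rpc.py call against it. A minimal sketch of that setup and the first configuration step (the bdev_raid_create invocation issued a few lines below; paths as laid out in this job's workspace):

    # Start a bare bdev application with a private RPC socket and raid debug logging.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # (the test waits for the socket with waitforlisten before issuing RPCs)
    # Create the raid5f bdev; its base bdevs do not exist yet, so Existed_Raid
    # stays in the "configuring" state until they are registered.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid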
00:34:11.719 [2024-06-10 11:57:43.557804] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.719 [2024-06-10 11:57:43.733333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.976 [2024-06-10 11:57:43.957042] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.233 [2024-06-10 11:57:44.234135] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:12.490 11:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:12.490 11:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:34:12.490 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:12.748 [2024-06-10 11:57:44.730955] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:12.748 [2024-06-10 11:57:44.731087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:12.748 [2024-06-10 11:57:44.731126] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:12.748 [2024-06-10 11:57:44.731174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:12.748 [2024-06-10 11:57:44.731192] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:12.748 [2024-06-10 11:57:44.731236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:12.748 11:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:13.313 11:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:13.313 "name": "Existed_Raid", 00:34:13.313 "uuid": "00000000-0000-0000-0000-000000000000", 
00:34:13.313 "strip_size_kb": 64, 00:34:13.313 "state": "configuring", 00:34:13.313 "raid_level": "raid5f", 00:34:13.313 "superblock": false, 00:34:13.313 "num_base_bdevs": 3, 00:34:13.313 "num_base_bdevs_discovered": 0, 00:34:13.313 "num_base_bdevs_operational": 3, 00:34:13.313 "base_bdevs_list": [ 00:34:13.313 { 00:34:13.313 "name": "BaseBdev1", 00:34:13.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.313 "is_configured": false, 00:34:13.313 "data_offset": 0, 00:34:13.313 "data_size": 0 00:34:13.313 }, 00:34:13.313 { 00:34:13.313 "name": "BaseBdev2", 00:34:13.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.313 "is_configured": false, 00:34:13.313 "data_offset": 0, 00:34:13.313 "data_size": 0 00:34:13.313 }, 00:34:13.313 { 00:34:13.313 "name": "BaseBdev3", 00:34:13.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.313 "is_configured": false, 00:34:13.313 "data_offset": 0, 00:34:13.313 "data_size": 0 00:34:13.313 } 00:34:13.313 ] 00:34:13.313 }' 00:34:13.313 11:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:13.313 11:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.879 11:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:14.192 [2024-06-10 11:57:45.975088] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:14.193 [2024-06-10 11:57:45.975148] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:34:14.193 11:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:14.450 [2024-06-10 11:57:46.259166] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:14.450 [2024-06-10 11:57:46.259261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:14.450 [2024-06-10 11:57:46.259273] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:14.450 [2024-06-10 11:57:46.259304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:14.450 [2024-06-10 11:57:46.259312] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:14.450 [2024-06-10 11:57:46.259354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:14.450 11:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:14.707 [2024-06-10 11:57:46.520614] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:14.707 BaseBdev1 00:34:14.707 11:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:14.707 11:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:34:14.707 11:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:14.707 11:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:34:14.707 11:57:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:14.707 11:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:14.707 11:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:14.965 11:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:15.222 [ 00:34:15.222 { 00:34:15.223 "name": "BaseBdev1", 00:34:15.223 "aliases": [ 00:34:15.223 "f73d53b6-8ff4-44b7-8605-ae14ada3d60c" 00:34:15.223 ], 00:34:15.223 "product_name": "Malloc disk", 00:34:15.223 "block_size": 512, 00:34:15.223 "num_blocks": 65536, 00:34:15.223 "uuid": "f73d53b6-8ff4-44b7-8605-ae14ada3d60c", 00:34:15.223 "assigned_rate_limits": { 00:34:15.223 "rw_ios_per_sec": 0, 00:34:15.223 "rw_mbytes_per_sec": 0, 00:34:15.223 "r_mbytes_per_sec": 0, 00:34:15.223 "w_mbytes_per_sec": 0 00:34:15.223 }, 00:34:15.223 "claimed": true, 00:34:15.223 "claim_type": "exclusive_write", 00:34:15.223 "zoned": false, 00:34:15.223 "supported_io_types": { 00:34:15.223 "read": true, 00:34:15.223 "write": true, 00:34:15.223 "unmap": true, 00:34:15.223 "write_zeroes": true, 00:34:15.223 "flush": true, 00:34:15.223 "reset": true, 00:34:15.223 "compare": false, 00:34:15.223 "compare_and_write": false, 00:34:15.223 "abort": true, 00:34:15.223 "nvme_admin": false, 00:34:15.223 "nvme_io": false 00:34:15.223 }, 00:34:15.223 "memory_domains": [ 00:34:15.223 { 00:34:15.223 "dma_device_id": "system", 00:34:15.223 "dma_device_type": 1 00:34:15.223 }, 00:34:15.223 { 00:34:15.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:15.223 "dma_device_type": 2 00:34:15.223 } 00:34:15.223 ], 00:34:15.223 "driver_specific": {} 00:34:15.223 } 00:34:15.223 ] 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:15.223 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:34:15.480 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:15.480 "name": "Existed_Raid", 00:34:15.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.480 "strip_size_kb": 64, 00:34:15.480 "state": "configuring", 00:34:15.480 "raid_level": "raid5f", 00:34:15.480 "superblock": false, 00:34:15.480 "num_base_bdevs": 3, 00:34:15.480 "num_base_bdevs_discovered": 1, 00:34:15.480 "num_base_bdevs_operational": 3, 00:34:15.480 "base_bdevs_list": [ 00:34:15.480 { 00:34:15.480 "name": "BaseBdev1", 00:34:15.480 "uuid": "f73d53b6-8ff4-44b7-8605-ae14ada3d60c", 00:34:15.480 "is_configured": true, 00:34:15.480 "data_offset": 0, 00:34:15.480 "data_size": 65536 00:34:15.480 }, 00:34:15.480 { 00:34:15.480 "name": "BaseBdev2", 00:34:15.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.480 "is_configured": false, 00:34:15.480 "data_offset": 0, 00:34:15.480 "data_size": 0 00:34:15.480 }, 00:34:15.480 { 00:34:15.480 "name": "BaseBdev3", 00:34:15.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.480 "is_configured": false, 00:34:15.480 "data_offset": 0, 00:34:15.480 "data_size": 0 00:34:15.480 } 00:34:15.480 ] 00:34:15.480 }' 00:34:15.480 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:15.480 11:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.045 11:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:16.303 [2024-06-10 11:57:48.145012] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:16.303 [2024-06-10 11:57:48.145075] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:16.303 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:16.561 [2024-06-10 11:57:48.425121] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:16.561 [2024-06-10 11:57:48.427411] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:16.561 [2024-06-10 11:57:48.427506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:16.561 [2024-06-10 11:57:48.427518] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:16.561 [2024-06-10 11:57:48.427560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:16.561 11:57:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:16.561 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:16.819 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:16.819 "name": "Existed_Raid", 00:34:16.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.819 "strip_size_kb": 64, 00:34:16.819 "state": "configuring", 00:34:16.819 "raid_level": "raid5f", 00:34:16.819 "superblock": false, 00:34:16.819 "num_base_bdevs": 3, 00:34:16.819 "num_base_bdevs_discovered": 1, 00:34:16.819 "num_base_bdevs_operational": 3, 00:34:16.819 "base_bdevs_list": [ 00:34:16.819 { 00:34:16.819 "name": "BaseBdev1", 00:34:16.819 "uuid": "f73d53b6-8ff4-44b7-8605-ae14ada3d60c", 00:34:16.819 "is_configured": true, 00:34:16.819 "data_offset": 0, 00:34:16.819 "data_size": 65536 00:34:16.819 }, 00:34:16.819 { 00:34:16.819 "name": "BaseBdev2", 00:34:16.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.819 "is_configured": false, 00:34:16.819 "data_offset": 0, 00:34:16.819 "data_size": 0 00:34:16.819 }, 00:34:16.819 { 00:34:16.819 "name": "BaseBdev3", 00:34:16.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.819 "is_configured": false, 00:34:16.819 "data_offset": 0, 00:34:16.819 "data_size": 0 00:34:16.819 } 00:34:16.819 ] 00:34:16.819 }' 00:34:16.819 11:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:16.819 11:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.386 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:17.644 [2024-06-10 11:57:49.534816] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:17.644 BaseBdev2 00:34:17.644 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:17.644 11:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:34:17.644 11:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:17.644 11:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:34:17.644 11:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:17.644 11:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:17.644 11:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:17.967 [ 00:34:17.967 { 00:34:17.967 "name": "BaseBdev2", 00:34:17.967 "aliases": [ 00:34:17.967 "12f0944b-9762-4e30-90c1-056a15372961" 00:34:17.967 ], 00:34:17.967 "product_name": "Malloc disk", 00:34:17.967 "block_size": 512, 00:34:17.967 "num_blocks": 65536, 00:34:17.967 "uuid": "12f0944b-9762-4e30-90c1-056a15372961", 00:34:17.967 "assigned_rate_limits": { 00:34:17.967 "rw_ios_per_sec": 0, 00:34:17.967 "rw_mbytes_per_sec": 0, 00:34:17.967 "r_mbytes_per_sec": 0, 00:34:17.967 "w_mbytes_per_sec": 0 00:34:17.967 }, 00:34:17.967 "claimed": true, 00:34:17.967 "claim_type": "exclusive_write", 00:34:17.967 "zoned": false, 00:34:17.967 "supported_io_types": { 00:34:17.967 "read": true, 00:34:17.967 "write": true, 00:34:17.967 "unmap": true, 00:34:17.967 "write_zeroes": true, 00:34:17.967 "flush": true, 00:34:17.967 "reset": true, 00:34:17.967 "compare": false, 00:34:17.967 "compare_and_write": false, 00:34:17.967 "abort": true, 00:34:17.967 "nvme_admin": false, 00:34:17.967 "nvme_io": false 00:34:17.967 }, 00:34:17.967 "memory_domains": [ 00:34:17.967 { 00:34:17.967 "dma_device_id": "system", 00:34:17.967 "dma_device_type": 1 00:34:17.967 }, 00:34:17.967 { 00:34:17.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.967 "dma_device_type": 2 00:34:17.967 } 00:34:17.967 ], 00:34:17.967 "driver_specific": {} 00:34:17.967 } 00:34:17.967 ] 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:17.967 11:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.225 11:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:18.225 
"name": "Existed_Raid", 00:34:18.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.225 "strip_size_kb": 64, 00:34:18.225 "state": "configuring", 00:34:18.225 "raid_level": "raid5f", 00:34:18.225 "superblock": false, 00:34:18.225 "num_base_bdevs": 3, 00:34:18.225 "num_base_bdevs_discovered": 2, 00:34:18.225 "num_base_bdevs_operational": 3, 00:34:18.225 "base_bdevs_list": [ 00:34:18.225 { 00:34:18.225 "name": "BaseBdev1", 00:34:18.225 "uuid": "f73d53b6-8ff4-44b7-8605-ae14ada3d60c", 00:34:18.225 "is_configured": true, 00:34:18.225 "data_offset": 0, 00:34:18.225 "data_size": 65536 00:34:18.225 }, 00:34:18.225 { 00:34:18.225 "name": "BaseBdev2", 00:34:18.225 "uuid": "12f0944b-9762-4e30-90c1-056a15372961", 00:34:18.225 "is_configured": true, 00:34:18.225 "data_offset": 0, 00:34:18.225 "data_size": 65536 00:34:18.225 }, 00:34:18.225 { 00:34:18.225 "name": "BaseBdev3", 00:34:18.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.225 "is_configured": false, 00:34:18.225 "data_offset": 0, 00:34:18.225 "data_size": 0 00:34:18.225 } 00:34:18.225 ] 00:34:18.225 }' 00:34:18.225 11:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:18.226 11:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.791 11:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:19.048 [2024-06-10 11:57:51.065750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:19.048 [2024-06-10 11:57:51.065842] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:34:19.048 [2024-06-10 11:57:51.065855] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:19.048 [2024-06-10 11:57:51.065991] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:34:19.048 [2024-06-10 11:57:51.072504] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:34:19.048 [2024-06-10 11:57:51.072539] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:34:19.048 [2024-06-10 11:57:51.072818] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.049 BaseBdev3 00:34:19.049 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:34:19.049 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:34:19.049 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:19.049 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:34:19.049 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:19.049 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:19.049 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:19.306 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:19.564 [ 00:34:19.564 { 00:34:19.564 "name": 
"BaseBdev3", 00:34:19.564 "aliases": [ 00:34:19.564 "252f0175-d47c-4907-bdab-a0193c283b19" 00:34:19.564 ], 00:34:19.564 "product_name": "Malloc disk", 00:34:19.564 "block_size": 512, 00:34:19.565 "num_blocks": 65536, 00:34:19.565 "uuid": "252f0175-d47c-4907-bdab-a0193c283b19", 00:34:19.565 "assigned_rate_limits": { 00:34:19.565 "rw_ios_per_sec": 0, 00:34:19.565 "rw_mbytes_per_sec": 0, 00:34:19.565 "r_mbytes_per_sec": 0, 00:34:19.565 "w_mbytes_per_sec": 0 00:34:19.565 }, 00:34:19.565 "claimed": true, 00:34:19.565 "claim_type": "exclusive_write", 00:34:19.565 "zoned": false, 00:34:19.565 "supported_io_types": { 00:34:19.565 "read": true, 00:34:19.565 "write": true, 00:34:19.565 "unmap": true, 00:34:19.565 "write_zeroes": true, 00:34:19.565 "flush": true, 00:34:19.565 "reset": true, 00:34:19.565 "compare": false, 00:34:19.565 "compare_and_write": false, 00:34:19.565 "abort": true, 00:34:19.565 "nvme_admin": false, 00:34:19.565 "nvme_io": false 00:34:19.565 }, 00:34:19.565 "memory_domains": [ 00:34:19.565 { 00:34:19.565 "dma_device_id": "system", 00:34:19.565 "dma_device_type": 1 00:34:19.565 }, 00:34:19.565 { 00:34:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.565 "dma_device_type": 2 00:34:19.565 } 00:34:19.565 ], 00:34:19.565 "driver_specific": {} 00:34:19.565 } 00:34:19.565 ] 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.565 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:19.823 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:19.823 "name": "Existed_Raid", 00:34:19.823 "uuid": "980bdfa2-ec5a-45f8-8cee-7c69fe92e620", 00:34:19.823 "strip_size_kb": 64, 00:34:19.823 "state": "online", 00:34:19.823 "raid_level": "raid5f", 00:34:19.823 "superblock": false, 00:34:19.823 "num_base_bdevs": 3, 00:34:19.823 "num_base_bdevs_discovered": 3, 00:34:19.823 
"num_base_bdevs_operational": 3, 00:34:19.823 "base_bdevs_list": [ 00:34:19.823 { 00:34:19.823 "name": "BaseBdev1", 00:34:19.823 "uuid": "f73d53b6-8ff4-44b7-8605-ae14ada3d60c", 00:34:19.823 "is_configured": true, 00:34:19.823 "data_offset": 0, 00:34:19.823 "data_size": 65536 00:34:19.823 }, 00:34:19.823 { 00:34:19.823 "name": "BaseBdev2", 00:34:19.823 "uuid": "12f0944b-9762-4e30-90c1-056a15372961", 00:34:19.823 "is_configured": true, 00:34:19.823 "data_offset": 0, 00:34:19.823 "data_size": 65536 00:34:19.823 }, 00:34:19.823 { 00:34:19.823 "name": "BaseBdev3", 00:34:19.823 "uuid": "252f0175-d47c-4907-bdab-a0193c283b19", 00:34:19.823 "is_configured": true, 00:34:19.823 "data_offset": 0, 00:34:19.823 "data_size": 65536 00:34:19.823 } 00:34:19.823 ] 00:34:19.823 }' 00:34:19.823 11:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:19.823 11:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.390 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:20.390 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:20.390 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:20.390 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:20.391 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:20.391 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:20.391 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:20.391 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:20.649 [2024-06-10 11:57:52.588865] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:20.649 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:20.649 "name": "Existed_Raid", 00:34:20.649 "aliases": [ 00:34:20.649 "980bdfa2-ec5a-45f8-8cee-7c69fe92e620" 00:34:20.649 ], 00:34:20.649 "product_name": "Raid Volume", 00:34:20.649 "block_size": 512, 00:34:20.649 "num_blocks": 131072, 00:34:20.649 "uuid": "980bdfa2-ec5a-45f8-8cee-7c69fe92e620", 00:34:20.649 "assigned_rate_limits": { 00:34:20.649 "rw_ios_per_sec": 0, 00:34:20.649 "rw_mbytes_per_sec": 0, 00:34:20.649 "r_mbytes_per_sec": 0, 00:34:20.649 "w_mbytes_per_sec": 0 00:34:20.649 }, 00:34:20.649 "claimed": false, 00:34:20.649 "zoned": false, 00:34:20.649 "supported_io_types": { 00:34:20.649 "read": true, 00:34:20.649 "write": true, 00:34:20.649 "unmap": false, 00:34:20.649 "write_zeroes": true, 00:34:20.649 "flush": false, 00:34:20.649 "reset": true, 00:34:20.649 "compare": false, 00:34:20.649 "compare_and_write": false, 00:34:20.649 "abort": false, 00:34:20.649 "nvme_admin": false, 00:34:20.649 "nvme_io": false 00:34:20.649 }, 00:34:20.649 "driver_specific": { 00:34:20.649 "raid": { 00:34:20.649 "uuid": "980bdfa2-ec5a-45f8-8cee-7c69fe92e620", 00:34:20.649 "strip_size_kb": 64, 00:34:20.649 "state": "online", 00:34:20.649 "raid_level": "raid5f", 00:34:20.649 "superblock": false, 00:34:20.649 "num_base_bdevs": 3, 00:34:20.649 "num_base_bdevs_discovered": 3, 00:34:20.649 "num_base_bdevs_operational": 3, 00:34:20.649 "base_bdevs_list": [ 
00:34:20.649 { 00:34:20.649 "name": "BaseBdev1", 00:34:20.649 "uuid": "f73d53b6-8ff4-44b7-8605-ae14ada3d60c", 00:34:20.649 "is_configured": true, 00:34:20.649 "data_offset": 0, 00:34:20.649 "data_size": 65536 00:34:20.649 }, 00:34:20.649 { 00:34:20.649 "name": "BaseBdev2", 00:34:20.649 "uuid": "12f0944b-9762-4e30-90c1-056a15372961", 00:34:20.650 "is_configured": true, 00:34:20.650 "data_offset": 0, 00:34:20.650 "data_size": 65536 00:34:20.650 }, 00:34:20.650 { 00:34:20.650 "name": "BaseBdev3", 00:34:20.650 "uuid": "252f0175-d47c-4907-bdab-a0193c283b19", 00:34:20.650 "is_configured": true, 00:34:20.650 "data_offset": 0, 00:34:20.650 "data_size": 65536 00:34:20.650 } 00:34:20.650 ] 00:34:20.650 } 00:34:20.650 } 00:34:20.650 }' 00:34:20.650 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:20.650 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:20.650 BaseBdev2 00:34:20.650 BaseBdev3' 00:34:20.650 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:20.650 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:20.650 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:20.908 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:20.908 "name": "BaseBdev1", 00:34:20.908 "aliases": [ 00:34:20.908 "f73d53b6-8ff4-44b7-8605-ae14ada3d60c" 00:34:20.908 ], 00:34:20.908 "product_name": "Malloc disk", 00:34:20.908 "block_size": 512, 00:34:20.908 "num_blocks": 65536, 00:34:20.908 "uuid": "f73d53b6-8ff4-44b7-8605-ae14ada3d60c", 00:34:20.908 "assigned_rate_limits": { 00:34:20.908 "rw_ios_per_sec": 0, 00:34:20.908 "rw_mbytes_per_sec": 0, 00:34:20.908 "r_mbytes_per_sec": 0, 00:34:20.908 "w_mbytes_per_sec": 0 00:34:20.908 }, 00:34:20.908 "claimed": true, 00:34:20.908 "claim_type": "exclusive_write", 00:34:20.908 "zoned": false, 00:34:20.908 "supported_io_types": { 00:34:20.908 "read": true, 00:34:20.908 "write": true, 00:34:20.908 "unmap": true, 00:34:20.908 "write_zeroes": true, 00:34:20.908 "flush": true, 00:34:20.908 "reset": true, 00:34:20.908 "compare": false, 00:34:20.908 "compare_and_write": false, 00:34:20.908 "abort": true, 00:34:20.908 "nvme_admin": false, 00:34:20.908 "nvme_io": false 00:34:20.908 }, 00:34:20.908 "memory_domains": [ 00:34:20.908 { 00:34:20.908 "dma_device_id": "system", 00:34:20.908 "dma_device_type": 1 00:34:20.908 }, 00:34:20.908 { 00:34:20.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:20.908 "dma_device_type": 2 00:34:20.908 } 00:34:20.908 ], 00:34:20.908 "driver_specific": {} 00:34:20.908 }' 00:34:20.908 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:20.908 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:20.908 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:20.908 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:21.166 11:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:21.166 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:21.166 11:57:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:21.166 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:21.166 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:21.166 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:21.166 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:21.425 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:21.425 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:21.425 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:21.425 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:21.425 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:21.425 "name": "BaseBdev2", 00:34:21.425 "aliases": [ 00:34:21.425 "12f0944b-9762-4e30-90c1-056a15372961" 00:34:21.425 ], 00:34:21.425 "product_name": "Malloc disk", 00:34:21.425 "block_size": 512, 00:34:21.425 "num_blocks": 65536, 00:34:21.425 "uuid": "12f0944b-9762-4e30-90c1-056a15372961", 00:34:21.425 "assigned_rate_limits": { 00:34:21.425 "rw_ios_per_sec": 0, 00:34:21.425 "rw_mbytes_per_sec": 0, 00:34:21.425 "r_mbytes_per_sec": 0, 00:34:21.425 "w_mbytes_per_sec": 0 00:34:21.425 }, 00:34:21.425 "claimed": true, 00:34:21.425 "claim_type": "exclusive_write", 00:34:21.425 "zoned": false, 00:34:21.425 "supported_io_types": { 00:34:21.425 "read": true, 00:34:21.425 "write": true, 00:34:21.425 "unmap": true, 00:34:21.425 "write_zeroes": true, 00:34:21.426 "flush": true, 00:34:21.426 "reset": true, 00:34:21.426 "compare": false, 00:34:21.426 "compare_and_write": false, 00:34:21.426 "abort": true, 00:34:21.426 "nvme_admin": false, 00:34:21.426 "nvme_io": false 00:34:21.426 }, 00:34:21.426 "memory_domains": [ 00:34:21.426 { 00:34:21.426 "dma_device_id": "system", 00:34:21.426 "dma_device_type": 1 00:34:21.426 }, 00:34:21.426 { 00:34:21.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:21.426 "dma_device_type": 2 00:34:21.426 } 00:34:21.426 ], 00:34:21.426 "driver_specific": {} 00:34:21.426 }' 00:34:21.426 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:21.684 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:21.943 11:57:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:21.943 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:21.943 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:21.943 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:21.943 11:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:22.200 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:22.200 "name": "BaseBdev3", 00:34:22.200 "aliases": [ 00:34:22.200 "252f0175-d47c-4907-bdab-a0193c283b19" 00:34:22.200 ], 00:34:22.200 "product_name": "Malloc disk", 00:34:22.200 "block_size": 512, 00:34:22.200 "num_blocks": 65536, 00:34:22.200 "uuid": "252f0175-d47c-4907-bdab-a0193c283b19", 00:34:22.200 "assigned_rate_limits": { 00:34:22.200 "rw_ios_per_sec": 0, 00:34:22.200 "rw_mbytes_per_sec": 0, 00:34:22.200 "r_mbytes_per_sec": 0, 00:34:22.200 "w_mbytes_per_sec": 0 00:34:22.200 }, 00:34:22.200 "claimed": true, 00:34:22.200 "claim_type": "exclusive_write", 00:34:22.200 "zoned": false, 00:34:22.200 "supported_io_types": { 00:34:22.200 "read": true, 00:34:22.200 "write": true, 00:34:22.200 "unmap": true, 00:34:22.200 "write_zeroes": true, 00:34:22.200 "flush": true, 00:34:22.200 "reset": true, 00:34:22.200 "compare": false, 00:34:22.200 "compare_and_write": false, 00:34:22.200 "abort": true, 00:34:22.200 "nvme_admin": false, 00:34:22.200 "nvme_io": false 00:34:22.200 }, 00:34:22.200 "memory_domains": [ 00:34:22.200 { 00:34:22.200 "dma_device_id": "system", 00:34:22.200 "dma_device_type": 1 00:34:22.200 }, 00:34:22.200 { 00:34:22.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:22.200 "dma_device_type": 2 00:34:22.200 } 00:34:22.200 ], 00:34:22.200 "driver_specific": {} 00:34:22.200 }' 00:34:22.201 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:22.201 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:22.201 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:22.201 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:22.201 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:22.201 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:22.201 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:22.459 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:22.459 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:22.459 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:22.459 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:22.459 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:22.459 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:22.717 [2024-06-10 11:57:54.649318] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:22.975 11:57:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:22.975 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:22.976 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:22.976 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:22.976 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:22.976 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:22.976 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:22.976 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.976 11:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:22.976 11:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:22.976 "name": "Existed_Raid", 00:34:22.976 "uuid": "980bdfa2-ec5a-45f8-8cee-7c69fe92e620", 00:34:22.976 "strip_size_kb": 64, 00:34:22.976 "state": "online", 00:34:22.976 "raid_level": "raid5f", 00:34:22.976 "superblock": false, 00:34:22.976 "num_base_bdevs": 3, 00:34:22.976 "num_base_bdevs_discovered": 2, 00:34:22.976 "num_base_bdevs_operational": 2, 00:34:22.976 "base_bdevs_list": [ 00:34:22.976 { 00:34:22.976 "name": null, 00:34:22.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.976 "is_configured": false, 00:34:22.976 "data_offset": 0, 00:34:22.976 "data_size": 65536 00:34:22.976 }, 00:34:22.976 { 00:34:22.976 "name": "BaseBdev2", 00:34:22.976 "uuid": "12f0944b-9762-4e30-90c1-056a15372961", 00:34:22.976 "is_configured": true, 00:34:22.976 "data_offset": 0, 00:34:22.976 "data_size": 65536 00:34:22.976 }, 00:34:22.976 { 00:34:22.976 "name": "BaseBdev3", 00:34:22.976 "uuid": "252f0175-d47c-4907-bdab-a0193c283b19", 00:34:22.976 "is_configured": true, 00:34:22.976 "data_offset": 0, 00:34:22.976 "data_size": 65536 00:34:22.976 } 00:34:22.976 ] 00:34:22.976 }' 00:34:22.976 11:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:22.976 11:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.910 11:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:34:23.910 11:57:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:23.910 11:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:23.910 11:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.910 11:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:23.910 11:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:23.910 11:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:24.168 [2024-06-10 11:57:56.047371] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:24.168 [2024-06-10 11:57:56.047488] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:24.168 [2024-06-10 11:57:56.155319] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:24.168 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:24.168 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:24.168 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.168 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:24.426 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:24.426 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:24.426 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:24.684 [2024-06-10 11:57:56.607462] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:24.684 [2024-06-10 11:57:56.607558] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:34:24.684 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:24.684 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:24.942 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.942 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:34:24.942 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:34:24.942 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:34:24.942 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:34:24.942 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:34:24.942 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:24.942 11:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:25.200 BaseBdev2 00:34:25.200 11:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:34:25.200 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:34:25.200 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:25.200 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:34:25.200 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:25.200 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:25.200 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:25.458 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:25.717 [ 00:34:25.717 { 00:34:25.717 "name": "BaseBdev2", 00:34:25.717 "aliases": [ 00:34:25.717 "e8305f9a-47dd-42b9-be59-4840574ded5b" 00:34:25.717 ], 00:34:25.717 "product_name": "Malloc disk", 00:34:25.717 "block_size": 512, 00:34:25.717 "num_blocks": 65536, 00:34:25.717 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:25.717 "assigned_rate_limits": { 00:34:25.717 "rw_ios_per_sec": 0, 00:34:25.717 "rw_mbytes_per_sec": 0, 00:34:25.717 "r_mbytes_per_sec": 0, 00:34:25.717 "w_mbytes_per_sec": 0 00:34:25.717 }, 00:34:25.717 "claimed": false, 00:34:25.717 "zoned": false, 00:34:25.717 "supported_io_types": { 00:34:25.717 "read": true, 00:34:25.717 "write": true, 00:34:25.717 "unmap": true, 00:34:25.718 "write_zeroes": true, 00:34:25.718 "flush": true, 00:34:25.718 "reset": true, 00:34:25.718 "compare": false, 00:34:25.718 "compare_and_write": false, 00:34:25.718 "abort": true, 00:34:25.718 "nvme_admin": false, 00:34:25.718 "nvme_io": false 00:34:25.718 }, 00:34:25.718 "memory_domains": [ 00:34:25.718 { 00:34:25.718 "dma_device_id": "system", 00:34:25.718 "dma_device_type": 1 00:34:25.718 }, 00:34:25.718 { 00:34:25.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:25.718 "dma_device_type": 2 00:34:25.718 } 00:34:25.718 ], 00:34:25.718 "driver_specific": {} 00:34:25.718 } 00:34:25.718 ] 00:34:25.718 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:34:25.718 11:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:25.718 11:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:25.718 11:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:25.976 BaseBdev3 00:34:25.976 11:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:34:25.976 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:34:25.976 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:25.976 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:34:25.976 11:57:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:25.976 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:25.976 11:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:26.234 11:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:26.492 [ 00:34:26.492 { 00:34:26.492 "name": "BaseBdev3", 00:34:26.492 "aliases": [ 00:34:26.492 "8d814165-06ba-4438-a524-a1d16a9e75ff" 00:34:26.492 ], 00:34:26.492 "product_name": "Malloc disk", 00:34:26.492 "block_size": 512, 00:34:26.492 "num_blocks": 65536, 00:34:26.492 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:26.492 "assigned_rate_limits": { 00:34:26.492 "rw_ios_per_sec": 0, 00:34:26.492 "rw_mbytes_per_sec": 0, 00:34:26.492 "r_mbytes_per_sec": 0, 00:34:26.492 "w_mbytes_per_sec": 0 00:34:26.492 }, 00:34:26.492 "claimed": false, 00:34:26.492 "zoned": false, 00:34:26.492 "supported_io_types": { 00:34:26.492 "read": true, 00:34:26.492 "write": true, 00:34:26.492 "unmap": true, 00:34:26.492 "write_zeroes": true, 00:34:26.492 "flush": true, 00:34:26.492 "reset": true, 00:34:26.492 "compare": false, 00:34:26.492 "compare_and_write": false, 00:34:26.492 "abort": true, 00:34:26.492 "nvme_admin": false, 00:34:26.492 "nvme_io": false 00:34:26.492 }, 00:34:26.492 "memory_domains": [ 00:34:26.492 { 00:34:26.492 "dma_device_id": "system", 00:34:26.492 "dma_device_type": 1 00:34:26.492 }, 00:34:26.492 { 00:34:26.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:26.492 "dma_device_type": 2 00:34:26.492 } 00:34:26.492 ], 00:34:26.492 "driver_specific": {} 00:34:26.492 } 00:34:26.492 ] 00:34:26.492 11:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:34:26.492 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:26.492 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:26.492 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:26.750 [2024-06-10 11:57:58.585289] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:26.750 [2024-06-10 11:57:58.585375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:26.750 [2024-06-10 11:57:58.585428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:26.750 [2024-06-10 11:57:58.587634] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.750 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:27.007 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:27.007 "name": "Existed_Raid", 00:34:27.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.007 "strip_size_kb": 64, 00:34:27.007 "state": "configuring", 00:34:27.007 "raid_level": "raid5f", 00:34:27.007 "superblock": false, 00:34:27.007 "num_base_bdevs": 3, 00:34:27.007 "num_base_bdevs_discovered": 2, 00:34:27.007 "num_base_bdevs_operational": 3, 00:34:27.007 "base_bdevs_list": [ 00:34:27.007 { 00:34:27.007 "name": "BaseBdev1", 00:34:27.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.007 "is_configured": false, 00:34:27.007 "data_offset": 0, 00:34:27.007 "data_size": 0 00:34:27.007 }, 00:34:27.007 { 00:34:27.007 "name": "BaseBdev2", 00:34:27.007 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:27.007 "is_configured": true, 00:34:27.007 "data_offset": 0, 00:34:27.007 "data_size": 65536 00:34:27.007 }, 00:34:27.007 { 00:34:27.007 "name": "BaseBdev3", 00:34:27.007 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:27.007 "is_configured": true, 00:34:27.007 "data_offset": 0, 00:34:27.007 "data_size": 65536 00:34:27.007 } 00:34:27.007 ] 00:34:27.007 }' 00:34:27.007 11:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:27.007 11:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:27.572 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:34:27.830 [2024-06-10 11:57:59.745518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.830 11:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:28.088 11:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:28.088 "name": "Existed_Raid", 00:34:28.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.088 "strip_size_kb": 64, 00:34:28.088 "state": "configuring", 00:34:28.088 "raid_level": "raid5f", 00:34:28.088 "superblock": false, 00:34:28.088 "num_base_bdevs": 3, 00:34:28.088 "num_base_bdevs_discovered": 1, 00:34:28.088 "num_base_bdevs_operational": 3, 00:34:28.088 "base_bdevs_list": [ 00:34:28.088 { 00:34:28.088 "name": "BaseBdev1", 00:34:28.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.088 "is_configured": false, 00:34:28.088 "data_offset": 0, 00:34:28.088 "data_size": 0 00:34:28.088 }, 00:34:28.088 { 00:34:28.088 "name": null, 00:34:28.088 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:28.088 "is_configured": false, 00:34:28.088 "data_offset": 0, 00:34:28.088 "data_size": 65536 00:34:28.088 }, 00:34:28.088 { 00:34:28.088 "name": "BaseBdev3", 00:34:28.088 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:28.088 "is_configured": true, 00:34:28.088 "data_offset": 0, 00:34:28.088 "data_size": 65536 00:34:28.088 } 00:34:28.088 ] 00:34:28.088 }' 00:34:28.088 11:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:28.088 11:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:29.020 11:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.020 11:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:29.278 11:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:34:29.278 11:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:29.536 [2024-06-10 11:58:01.426478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:29.536 BaseBdev1 00:34:29.536 11:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:34:29.536 11:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:34:29.536 11:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:29.536 11:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:34:29.536 11:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:29.536 11:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:29.536 11:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:29.793 11:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:30.052 [ 00:34:30.052 { 00:34:30.052 "name": "BaseBdev1", 00:34:30.052 "aliases": [ 00:34:30.052 "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce" 00:34:30.052 ], 00:34:30.052 "product_name": "Malloc disk", 00:34:30.052 "block_size": 512, 00:34:30.052 "num_blocks": 65536, 00:34:30.052 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:30.052 "assigned_rate_limits": { 00:34:30.052 "rw_ios_per_sec": 0, 00:34:30.052 "rw_mbytes_per_sec": 0, 00:34:30.052 "r_mbytes_per_sec": 0, 00:34:30.052 "w_mbytes_per_sec": 0 00:34:30.052 }, 00:34:30.052 "claimed": true, 00:34:30.052 "claim_type": "exclusive_write", 00:34:30.052 "zoned": false, 00:34:30.052 "supported_io_types": { 00:34:30.052 "read": true, 00:34:30.052 "write": true, 00:34:30.052 "unmap": true, 00:34:30.052 "write_zeroes": true, 00:34:30.052 "flush": true, 00:34:30.052 "reset": true, 00:34:30.052 "compare": false, 00:34:30.052 "compare_and_write": false, 00:34:30.052 "abort": true, 00:34:30.052 "nvme_admin": false, 00:34:30.052 "nvme_io": false 00:34:30.052 }, 00:34:30.052 "memory_domains": [ 00:34:30.052 { 00:34:30.052 "dma_device_id": "system", 00:34:30.052 "dma_device_type": 1 00:34:30.052 }, 00:34:30.052 { 00:34:30.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:30.052 "dma_device_type": 2 00:34:30.052 } 00:34:30.052 ], 00:34:30.052 "driver_specific": {} 00:34:30.052 } 00:34:30.052 ] 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:30.052 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.310 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:30.310 "name": "Existed_Raid", 00:34:30.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:30.310 "strip_size_kb": 64, 00:34:30.310 "state": "configuring", 00:34:30.310 
"raid_level": "raid5f", 00:34:30.310 "superblock": false, 00:34:30.310 "num_base_bdevs": 3, 00:34:30.310 "num_base_bdevs_discovered": 2, 00:34:30.310 "num_base_bdevs_operational": 3, 00:34:30.310 "base_bdevs_list": [ 00:34:30.310 { 00:34:30.310 "name": "BaseBdev1", 00:34:30.310 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:30.310 "is_configured": true, 00:34:30.310 "data_offset": 0, 00:34:30.310 "data_size": 65536 00:34:30.310 }, 00:34:30.310 { 00:34:30.310 "name": null, 00:34:30.310 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:30.310 "is_configured": false, 00:34:30.310 "data_offset": 0, 00:34:30.310 "data_size": 65536 00:34:30.310 }, 00:34:30.310 { 00:34:30.310 "name": "BaseBdev3", 00:34:30.310 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:30.310 "is_configured": true, 00:34:30.310 "data_offset": 0, 00:34:30.310 "data_size": 65536 00:34:30.310 } 00:34:30.310 ] 00:34:30.310 }' 00:34:30.310 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:30.310 11:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:30.937 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.937 11:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:31.195 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:34:31.195 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:34:31.452 [2024-06-10 11:58:03.471072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.452 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:31.710 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:31.710 "name": "Existed_Raid", 00:34:31.710 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:31.710 "strip_size_kb": 64, 00:34:31.710 "state": "configuring", 00:34:31.710 "raid_level": "raid5f", 00:34:31.710 "superblock": false, 00:34:31.710 "num_base_bdevs": 3, 00:34:31.710 "num_base_bdevs_discovered": 1, 00:34:31.710 "num_base_bdevs_operational": 3, 00:34:31.710 "base_bdevs_list": [ 00:34:31.710 { 00:34:31.710 "name": "BaseBdev1", 00:34:31.710 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:31.710 "is_configured": true, 00:34:31.710 "data_offset": 0, 00:34:31.710 "data_size": 65536 00:34:31.710 }, 00:34:31.710 { 00:34:31.710 "name": null, 00:34:31.710 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:31.710 "is_configured": false, 00:34:31.710 "data_offset": 0, 00:34:31.710 "data_size": 65536 00:34:31.710 }, 00:34:31.710 { 00:34:31.710 "name": null, 00:34:31.710 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:31.710 "is_configured": false, 00:34:31.710 "data_offset": 0, 00:34:31.710 "data_size": 65536 00:34:31.710 } 00:34:31.710 ] 00:34:31.710 }' 00:34:31.710 11:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:31.710 11:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:32.644 11:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.644 11:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:32.902 11:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:34:32.902 11:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:33.161 [2024-06-10 11:58:05.019490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:33.161 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.419 11:58:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:33.419 "name": "Existed_Raid", 00:34:33.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:33.419 "strip_size_kb": 64, 00:34:33.419 "state": "configuring", 00:34:33.419 "raid_level": "raid5f", 00:34:33.419 "superblock": false, 00:34:33.419 "num_base_bdevs": 3, 00:34:33.419 "num_base_bdevs_discovered": 2, 00:34:33.419 "num_base_bdevs_operational": 3, 00:34:33.419 "base_bdevs_list": [ 00:34:33.419 { 00:34:33.419 "name": "BaseBdev1", 00:34:33.419 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:33.419 "is_configured": true, 00:34:33.419 "data_offset": 0, 00:34:33.419 "data_size": 65536 00:34:33.419 }, 00:34:33.419 { 00:34:33.419 "name": null, 00:34:33.419 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:33.419 "is_configured": false, 00:34:33.419 "data_offset": 0, 00:34:33.419 "data_size": 65536 00:34:33.419 }, 00:34:33.419 { 00:34:33.419 "name": "BaseBdev3", 00:34:33.419 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:33.419 "is_configured": true, 00:34:33.419 "data_offset": 0, 00:34:33.419 "data_size": 65536 00:34:33.419 } 00:34:33.419 ] 00:34:33.419 }' 00:34:33.419 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:33.419 11:58:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:33.985 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.986 11:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:34.244 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:34:34.244 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:34.502 [2024-06-10 11:58:06.415792] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.502 11:58:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:34.761 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:34.761 "name": "Existed_Raid", 00:34:34.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.761 "strip_size_kb": 64, 00:34:34.761 "state": "configuring", 00:34:34.761 "raid_level": "raid5f", 00:34:34.761 "superblock": false, 00:34:34.761 "num_base_bdevs": 3, 00:34:34.761 "num_base_bdevs_discovered": 1, 00:34:34.761 "num_base_bdevs_operational": 3, 00:34:34.761 "base_bdevs_list": [ 00:34:34.761 { 00:34:34.761 "name": null, 00:34:34.761 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:34.761 "is_configured": false, 00:34:34.761 "data_offset": 0, 00:34:34.761 "data_size": 65536 00:34:34.761 }, 00:34:34.761 { 00:34:34.761 "name": null, 00:34:34.761 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:34.761 "is_configured": false, 00:34:34.761 "data_offset": 0, 00:34:34.761 "data_size": 65536 00:34:34.761 }, 00:34:34.761 { 00:34:34.761 "name": "BaseBdev3", 00:34:34.761 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:34.761 "is_configured": true, 00:34:34.761 "data_offset": 0, 00:34:34.761 "data_size": 65536 00:34:34.761 } 00:34:34.761 ] 00:34:34.761 }' 00:34:34.761 11:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:34.761 11:58:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.698 11:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:35.698 11:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.956 11:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:34:35.956 11:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:36.214 [2024-06-10 11:58:08.018203] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.214 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:36.472 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:36.472 "name": "Existed_Raid", 00:34:36.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.472 "strip_size_kb": 64, 00:34:36.472 "state": "configuring", 00:34:36.472 "raid_level": "raid5f", 00:34:36.472 "superblock": false, 00:34:36.472 "num_base_bdevs": 3, 00:34:36.472 "num_base_bdevs_discovered": 2, 00:34:36.472 "num_base_bdevs_operational": 3, 00:34:36.472 "base_bdevs_list": [ 00:34:36.472 { 00:34:36.472 "name": null, 00:34:36.472 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:36.472 "is_configured": false, 00:34:36.472 "data_offset": 0, 00:34:36.472 "data_size": 65536 00:34:36.472 }, 00:34:36.472 { 00:34:36.472 "name": "BaseBdev2", 00:34:36.472 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:36.472 "is_configured": true, 00:34:36.472 "data_offset": 0, 00:34:36.472 "data_size": 65536 00:34:36.472 }, 00:34:36.472 { 00:34:36.472 "name": "BaseBdev3", 00:34:36.472 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:36.472 "is_configured": true, 00:34:36.472 "data_offset": 0, 00:34:36.472 "data_size": 65536 00:34:36.472 } 00:34:36.472 ] 00:34:36.472 }' 00:34:36.472 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:36.472 11:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:37.065 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:37.065 11:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:37.374 11:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:34:37.374 11:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:37.374 11:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:37.374 11:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce 00:34:37.633 [2024-06-10 11:58:09.660891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:37.633 [2024-06-10 11:58:09.661200] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:34:37.633 [2024-06-10 11:58:09.661254] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:37.633 [2024-06-10 11:58:09.661459] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:37.633 [2024-06-10 11:58:09.667484] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:34:37.633 [2024-06-10 11:58:09.667667] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:34:37.633 [2024-06-10 11:58:09.668047] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:37.633 NewBaseBdev 00:34:37.633 11:58:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:34:37.633 11:58:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:34:37.633 11:58:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:37.633 11:58:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:34:37.633 11:58:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:37.633 11:58:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:37.633 11:58:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:37.891 11:58:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:38.149 [ 00:34:38.149 { 00:34:38.149 "name": "NewBaseBdev", 00:34:38.149 "aliases": [ 00:34:38.149 "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce" 00:34:38.149 ], 00:34:38.149 "product_name": "Malloc disk", 00:34:38.149 "block_size": 512, 00:34:38.149 "num_blocks": 65536, 00:34:38.149 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:38.149 "assigned_rate_limits": { 00:34:38.149 "rw_ios_per_sec": 0, 00:34:38.149 "rw_mbytes_per_sec": 0, 00:34:38.149 "r_mbytes_per_sec": 0, 00:34:38.149 "w_mbytes_per_sec": 0 00:34:38.149 }, 00:34:38.149 "claimed": true, 00:34:38.149 "claim_type": "exclusive_write", 00:34:38.149 "zoned": false, 00:34:38.149 "supported_io_types": { 00:34:38.149 "read": true, 00:34:38.149 "write": true, 00:34:38.149 "unmap": true, 00:34:38.149 "write_zeroes": true, 00:34:38.149 "flush": true, 00:34:38.149 "reset": true, 00:34:38.149 "compare": false, 00:34:38.149 "compare_and_write": false, 00:34:38.149 "abort": true, 00:34:38.149 "nvme_admin": false, 00:34:38.149 "nvme_io": false 00:34:38.149 }, 00:34:38.149 "memory_domains": [ 00:34:38.149 { 00:34:38.149 "dma_device_id": "system", 00:34:38.149 "dma_device_type": 1 00:34:38.149 }, 00:34:38.149 { 00:34:38.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:38.149 "dma_device_type": 2 00:34:38.149 } 00:34:38.149 ], 00:34:38.149 "driver_specific": {} 00:34:38.149 } 00:34:38.149 ] 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:38.149 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:38.150 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.150 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:38.408 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:38.408 "name": "Existed_Raid", 00:34:38.408 "uuid": "b24fb7a1-6dbd-4746-ac75-1f9f831646a0", 00:34:38.408 "strip_size_kb": 64, 00:34:38.408 "state": "online", 00:34:38.408 "raid_level": "raid5f", 00:34:38.408 "superblock": false, 00:34:38.408 "num_base_bdevs": 3, 00:34:38.408 "num_base_bdevs_discovered": 3, 00:34:38.408 "num_base_bdevs_operational": 3, 00:34:38.408 "base_bdevs_list": [ 00:34:38.408 { 00:34:38.408 "name": "NewBaseBdev", 00:34:38.408 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:38.408 "is_configured": true, 00:34:38.408 "data_offset": 0, 00:34:38.408 "data_size": 65536 00:34:38.408 }, 00:34:38.408 { 00:34:38.408 "name": "BaseBdev2", 00:34:38.408 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:38.408 "is_configured": true, 00:34:38.408 "data_offset": 0, 00:34:38.408 "data_size": 65536 00:34:38.408 }, 00:34:38.408 { 00:34:38.408 "name": "BaseBdev3", 00:34:38.408 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:38.408 "is_configured": true, 00:34:38.408 "data_offset": 0, 00:34:38.408 "data_size": 65536 00:34:38.408 } 00:34:38.408 ] 00:34:38.408 }' 00:34:38.408 11:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:38.408 11:58:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:39.342 [2024-06-10 11:58:11.332587] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:39.342 "name": "Existed_Raid", 00:34:39.342 "aliases": [ 00:34:39.342 "b24fb7a1-6dbd-4746-ac75-1f9f831646a0" 00:34:39.342 ], 00:34:39.342 "product_name": "Raid Volume", 00:34:39.342 "block_size": 512, 00:34:39.342 "num_blocks": 131072, 00:34:39.342 "uuid": "b24fb7a1-6dbd-4746-ac75-1f9f831646a0", 00:34:39.342 "assigned_rate_limits": { 00:34:39.342 "rw_ios_per_sec": 0, 00:34:39.342 "rw_mbytes_per_sec": 0, 00:34:39.342 "r_mbytes_per_sec": 0, 00:34:39.342 
"w_mbytes_per_sec": 0 00:34:39.342 }, 00:34:39.342 "claimed": false, 00:34:39.342 "zoned": false, 00:34:39.342 "supported_io_types": { 00:34:39.342 "read": true, 00:34:39.342 "write": true, 00:34:39.342 "unmap": false, 00:34:39.342 "write_zeroes": true, 00:34:39.342 "flush": false, 00:34:39.342 "reset": true, 00:34:39.342 "compare": false, 00:34:39.342 "compare_and_write": false, 00:34:39.342 "abort": false, 00:34:39.342 "nvme_admin": false, 00:34:39.342 "nvme_io": false 00:34:39.342 }, 00:34:39.342 "driver_specific": { 00:34:39.342 "raid": { 00:34:39.342 "uuid": "b24fb7a1-6dbd-4746-ac75-1f9f831646a0", 00:34:39.342 "strip_size_kb": 64, 00:34:39.342 "state": "online", 00:34:39.342 "raid_level": "raid5f", 00:34:39.342 "superblock": false, 00:34:39.342 "num_base_bdevs": 3, 00:34:39.342 "num_base_bdevs_discovered": 3, 00:34:39.342 "num_base_bdevs_operational": 3, 00:34:39.342 "base_bdevs_list": [ 00:34:39.342 { 00:34:39.342 "name": "NewBaseBdev", 00:34:39.342 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:39.342 "is_configured": true, 00:34:39.342 "data_offset": 0, 00:34:39.342 "data_size": 65536 00:34:39.342 }, 00:34:39.342 { 00:34:39.342 "name": "BaseBdev2", 00:34:39.342 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:39.342 "is_configured": true, 00:34:39.342 "data_offset": 0, 00:34:39.342 "data_size": 65536 00:34:39.342 }, 00:34:39.342 { 00:34:39.342 "name": "BaseBdev3", 00:34:39.342 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:39.342 "is_configured": true, 00:34:39.342 "data_offset": 0, 00:34:39.342 "data_size": 65536 00:34:39.342 } 00:34:39.342 ] 00:34:39.342 } 00:34:39.342 } 00:34:39.342 }' 00:34:39.342 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:39.600 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:34:39.600 BaseBdev2 00:34:39.600 BaseBdev3' 00:34:39.600 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:39.600 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:34:39.600 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:39.858 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:39.858 "name": "NewBaseBdev", 00:34:39.858 "aliases": [ 00:34:39.858 "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce" 00:34:39.858 ], 00:34:39.858 "product_name": "Malloc disk", 00:34:39.858 "block_size": 512, 00:34:39.858 "num_blocks": 65536, 00:34:39.858 "uuid": "2dd5db10-1107-4b72-b8f2-8e46eb6ac5ce", 00:34:39.858 "assigned_rate_limits": { 00:34:39.858 "rw_ios_per_sec": 0, 00:34:39.858 "rw_mbytes_per_sec": 0, 00:34:39.858 "r_mbytes_per_sec": 0, 00:34:39.858 "w_mbytes_per_sec": 0 00:34:39.858 }, 00:34:39.858 "claimed": true, 00:34:39.858 "claim_type": "exclusive_write", 00:34:39.858 "zoned": false, 00:34:39.858 "supported_io_types": { 00:34:39.858 "read": true, 00:34:39.858 "write": true, 00:34:39.858 "unmap": true, 00:34:39.858 "write_zeroes": true, 00:34:39.858 "flush": true, 00:34:39.858 "reset": true, 00:34:39.858 "compare": false, 00:34:39.858 "compare_and_write": false, 00:34:39.858 "abort": true, 00:34:39.858 "nvme_admin": false, 00:34:39.858 "nvme_io": false 00:34:39.858 }, 00:34:39.858 "memory_domains": [ 00:34:39.858 { 00:34:39.858 
"dma_device_id": "system", 00:34:39.858 "dma_device_type": 1 00:34:39.858 }, 00:34:39.858 { 00:34:39.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:39.858 "dma_device_type": 2 00:34:39.858 } 00:34:39.858 ], 00:34:39.858 "driver_specific": {} 00:34:39.858 }' 00:34:39.858 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:39.858 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:39.858 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:39.858 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:39.858 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:39.858 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:39.858 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:40.116 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:40.116 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:40.116 11:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:40.116 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:40.116 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:40.116 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:40.116 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:40.116 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:40.374 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:40.374 "name": "BaseBdev2", 00:34:40.374 "aliases": [ 00:34:40.374 "e8305f9a-47dd-42b9-be59-4840574ded5b" 00:34:40.374 ], 00:34:40.374 "product_name": "Malloc disk", 00:34:40.374 "block_size": 512, 00:34:40.374 "num_blocks": 65536, 00:34:40.374 "uuid": "e8305f9a-47dd-42b9-be59-4840574ded5b", 00:34:40.374 "assigned_rate_limits": { 00:34:40.374 "rw_ios_per_sec": 0, 00:34:40.374 "rw_mbytes_per_sec": 0, 00:34:40.374 "r_mbytes_per_sec": 0, 00:34:40.374 "w_mbytes_per_sec": 0 00:34:40.374 }, 00:34:40.374 "claimed": true, 00:34:40.374 "claim_type": "exclusive_write", 00:34:40.374 "zoned": false, 00:34:40.374 "supported_io_types": { 00:34:40.374 "read": true, 00:34:40.374 "write": true, 00:34:40.374 "unmap": true, 00:34:40.374 "write_zeroes": true, 00:34:40.374 "flush": true, 00:34:40.374 "reset": true, 00:34:40.374 "compare": false, 00:34:40.374 "compare_and_write": false, 00:34:40.374 "abort": true, 00:34:40.374 "nvme_admin": false, 00:34:40.374 "nvme_io": false 00:34:40.374 }, 00:34:40.374 "memory_domains": [ 00:34:40.374 { 00:34:40.374 "dma_device_id": "system", 00:34:40.374 "dma_device_type": 1 00:34:40.374 }, 00:34:40.374 { 00:34:40.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:40.374 "dma_device_type": 2 00:34:40.374 } 00:34:40.374 ], 00:34:40.374 "driver_specific": {} 00:34:40.374 }' 00:34:40.374 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:40.374 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:34:40.374 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:40.374 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:40.631 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:40.631 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:40.631 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:40.631 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:40.631 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:40.631 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:40.631 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:40.890 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:40.890 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:40.890 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:40.890 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:41.148 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:41.148 "name": "BaseBdev3", 00:34:41.148 "aliases": [ 00:34:41.148 "8d814165-06ba-4438-a524-a1d16a9e75ff" 00:34:41.148 ], 00:34:41.148 "product_name": "Malloc disk", 00:34:41.148 "block_size": 512, 00:34:41.148 "num_blocks": 65536, 00:34:41.148 "uuid": "8d814165-06ba-4438-a524-a1d16a9e75ff", 00:34:41.148 "assigned_rate_limits": { 00:34:41.148 "rw_ios_per_sec": 0, 00:34:41.148 "rw_mbytes_per_sec": 0, 00:34:41.148 "r_mbytes_per_sec": 0, 00:34:41.148 "w_mbytes_per_sec": 0 00:34:41.148 }, 00:34:41.148 "claimed": true, 00:34:41.148 "claim_type": "exclusive_write", 00:34:41.148 "zoned": false, 00:34:41.148 "supported_io_types": { 00:34:41.148 "read": true, 00:34:41.148 "write": true, 00:34:41.148 "unmap": true, 00:34:41.148 "write_zeroes": true, 00:34:41.148 "flush": true, 00:34:41.148 "reset": true, 00:34:41.148 "compare": false, 00:34:41.148 "compare_and_write": false, 00:34:41.148 "abort": true, 00:34:41.148 "nvme_admin": false, 00:34:41.148 "nvme_io": false 00:34:41.148 }, 00:34:41.148 "memory_domains": [ 00:34:41.148 { 00:34:41.148 "dma_device_id": "system", 00:34:41.148 "dma_device_type": 1 00:34:41.148 }, 00:34:41.148 { 00:34:41.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:41.148 "dma_device_type": 2 00:34:41.148 } 00:34:41.148 ], 00:34:41.148 "driver_specific": {} 00:34:41.148 }' 00:34:41.148 11:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:41.148 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:41.148 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:41.148 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:41.148 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:41.148 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:41.148 11:58:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:41.406 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:41.406 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:41.406 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:41.406 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:41.406 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:41.406 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:41.665 [2024-06-10 11:58:13.628911] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:41.665 [2024-06-10 11:58:13.629194] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:41.665 [2024-06-10 11:58:13.629474] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:41.665 [2024-06-10 11:58:13.629848] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:41.665 [2024-06-10 11:58:13.629986] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 151812 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 151812 ']' 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # kill -0 151812 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # uname 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 151812 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 151812' 00:34:41.665 killing process with pid 151812 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # kill 151812 00:34:41.665 [2024-06-10 11:58:13.680375] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:41.665 11:58:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # wait 151812 00:34:42.232 [2024-06-10 11:58:14.011546] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:43.605 ************************************ 00:34:43.605 END TEST raid5f_state_function_test 00:34:43.605 ************************************ 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:34:43.605 00:34:43.605 real 0m32.036s 00:34:43.605 user 0m58.102s 00:34:43.605 sys 0m4.346s 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:43.605 11:58:15 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:34:43.605 11:58:15 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:34:43.605 11:58:15 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:43.605 11:58:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:43.605 ************************************ 00:34:43.605 START TEST raid5f_state_function_test_sb 00:34:43.605 ************************************ 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid5f 3 true 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = 
true ']' 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=152800 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:43.605 Process raid pid: 152800 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 152800' 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 152800 /var/tmp/spdk-raid.sock 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 152800 ']' 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:43.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:43.605 11:58:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:43.605 [2024-06-10 11:58:15.639880] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:34:43.605 [2024-06-10 11:58:15.640295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.863 [2024-06-10 11:58:15.799716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.120 [2024-06-10 11:58:16.024357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.379 [2024-06-10 11:58:16.257177] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:44.637 11:58:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:44.637 11:58:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:34:44.637 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:44.895 [2024-06-10 11:58:16.901913] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:44.895 [2024-06-10 11:58:16.902214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:44.895 [2024-06-10 11:58:16.902353] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:44.895 [2024-06-10 11:58:16.902500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:44.895 [2024-06-10 11:58:16.902598] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:44.895 [2024-06-10 11:58:16.902681] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.895 11:58:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:45.152 11:58:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:45.152 "name": "Existed_Raid", 00:34:45.152 "uuid": "566c14e2-9f2c-4859-9b5f-a1c8a900cdb2", 00:34:45.152 "strip_size_kb": 64, 00:34:45.152 "state": "configuring", 00:34:45.152 "raid_level": "raid5f", 00:34:45.152 "superblock": true, 00:34:45.152 "num_base_bdevs": 3, 00:34:45.152 "num_base_bdevs_discovered": 0, 00:34:45.152 "num_base_bdevs_operational": 3, 00:34:45.152 "base_bdevs_list": [ 00:34:45.152 { 00:34:45.152 "name": "BaseBdev1", 00:34:45.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.152 "is_configured": false, 00:34:45.152 "data_offset": 0, 00:34:45.152 "data_size": 0 00:34:45.152 }, 00:34:45.152 { 00:34:45.152 "name": "BaseBdev2", 00:34:45.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.152 "is_configured": false, 00:34:45.152 "data_offset": 0, 00:34:45.152 "data_size": 0 00:34:45.152 }, 00:34:45.152 { 00:34:45.152 "name": "BaseBdev3", 00:34:45.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.152 "is_configured": false, 00:34:45.152 "data_offset": 0, 00:34:45.152 "data_size": 0 00:34:45.152 } 00:34:45.152 ] 00:34:45.152 }' 00:34:45.152 11:58:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:45.152 11:58:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.718 11:58:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:45.976 [2024-06-10 11:58:17.961946] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:45.976 [2024-06-10 11:58:17.962186] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:34:45.976 11:58:17 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:46.234 [2024-06-10 11:58:18.190062] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:46.234 [2024-06-10 11:58:18.190385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:46.234 [2024-06-10 11:58:18.190501] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:46.234 [2024-06-10 11:58:18.190560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:46.234 [2024-06-10 11:58:18.190637] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:46.234 [2024-06-10 11:58:18.190790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:46.234 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:46.492 [2024-06-10 11:58:18.451961] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:46.492 BaseBdev1 00:34:46.492 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:46.492 11:58:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:34:46.492 11:58:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:46.492 11:58:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:34:46.492 11:58:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:46.492 11:58:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:46.492 11:58:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:46.750 11:58:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:47.008 [ 00:34:47.008 { 00:34:47.008 "name": "BaseBdev1", 00:34:47.008 "aliases": [ 00:34:47.008 "ede484ce-bc1b-40a5-8e67-339974b571fd" 00:34:47.008 ], 00:34:47.008 "product_name": "Malloc disk", 00:34:47.008 "block_size": 512, 00:34:47.008 "num_blocks": 65536, 00:34:47.008 "uuid": "ede484ce-bc1b-40a5-8e67-339974b571fd", 00:34:47.008 "assigned_rate_limits": { 00:34:47.008 "rw_ios_per_sec": 0, 00:34:47.008 "rw_mbytes_per_sec": 0, 00:34:47.008 "r_mbytes_per_sec": 0, 00:34:47.008 "w_mbytes_per_sec": 0 00:34:47.008 }, 00:34:47.008 "claimed": true, 00:34:47.008 "claim_type": "exclusive_write", 00:34:47.008 "zoned": false, 00:34:47.008 "supported_io_types": { 00:34:47.008 "read": true, 00:34:47.008 "write": true, 00:34:47.008 "unmap": true, 00:34:47.008 "write_zeroes": true, 00:34:47.008 "flush": true, 00:34:47.008 "reset": true, 00:34:47.008 "compare": false, 00:34:47.008 "compare_and_write": false, 00:34:47.008 "abort": true, 00:34:47.008 "nvme_admin": false, 00:34:47.008 "nvme_io": false 00:34:47.008 }, 00:34:47.008 "memory_domains": [ 00:34:47.008 { 00:34:47.008 "dma_device_id": "system", 00:34:47.008 
"dma_device_type": 1 00:34:47.008 }, 00:34:47.008 { 00:34:47.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:47.009 "dma_device_type": 2 00:34:47.009 } 00:34:47.009 ], 00:34:47.009 "driver_specific": {} 00:34:47.009 } 00:34:47.009 ] 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:47.009 11:58:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:47.267 11:58:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:47.267 "name": "Existed_Raid", 00:34:47.267 "uuid": "7c26ed20-97e6-4819-8aae-86cccd8eb515", 00:34:47.267 "strip_size_kb": 64, 00:34:47.267 "state": "configuring", 00:34:47.267 "raid_level": "raid5f", 00:34:47.267 "superblock": true, 00:34:47.267 "num_base_bdevs": 3, 00:34:47.267 "num_base_bdevs_discovered": 1, 00:34:47.267 "num_base_bdevs_operational": 3, 00:34:47.267 "base_bdevs_list": [ 00:34:47.267 { 00:34:47.267 "name": "BaseBdev1", 00:34:47.267 "uuid": "ede484ce-bc1b-40a5-8e67-339974b571fd", 00:34:47.267 "is_configured": true, 00:34:47.267 "data_offset": 2048, 00:34:47.267 "data_size": 63488 00:34:47.267 }, 00:34:47.267 { 00:34:47.267 "name": "BaseBdev2", 00:34:47.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:47.267 "is_configured": false, 00:34:47.267 "data_offset": 0, 00:34:47.267 "data_size": 0 00:34:47.267 }, 00:34:47.267 { 00:34:47.267 "name": "BaseBdev3", 00:34:47.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:47.267 "is_configured": false, 00:34:47.267 "data_offset": 0, 00:34:47.267 "data_size": 0 00:34:47.267 } 00:34:47.267 ] 00:34:47.267 }' 00:34:47.267 11:58:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:47.267 11:58:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.833 11:58:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:48.091 [2024-06-10 11:58:20.020422] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:48.091 [2024-06-10 11:58:20.020721] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:48.091 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:48.350 [2024-06-10 11:58:20.312530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:48.350 [2024-06-10 11:58:20.315004] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:48.350 [2024-06-10 11:58:20.315239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:48.350 [2024-06-10 11:58:20.315335] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:48.350 [2024-06-10 11:58:20.315464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:48.350 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.608 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:48.608 "name": "Existed_Raid", 00:34:48.608 "uuid": "95d6657d-a9c3-43aa-b112-7254caf78d98", 00:34:48.608 "strip_size_kb": 64, 00:34:48.608 "state": "configuring", 00:34:48.608 "raid_level": "raid5f", 00:34:48.608 "superblock": true, 00:34:48.608 "num_base_bdevs": 3, 00:34:48.608 "num_base_bdevs_discovered": 1, 00:34:48.608 "num_base_bdevs_operational": 3, 00:34:48.608 "base_bdevs_list": [ 00:34:48.608 { 00:34:48.608 "name": "BaseBdev1", 00:34:48.608 "uuid": "ede484ce-bc1b-40a5-8e67-339974b571fd", 00:34:48.608 "is_configured": true, 00:34:48.608 
"data_offset": 2048, 00:34:48.608 "data_size": 63488 00:34:48.608 }, 00:34:48.608 { 00:34:48.608 "name": "BaseBdev2", 00:34:48.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:48.608 "is_configured": false, 00:34:48.608 "data_offset": 0, 00:34:48.608 "data_size": 0 00:34:48.608 }, 00:34:48.608 { 00:34:48.608 "name": "BaseBdev3", 00:34:48.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:48.608 "is_configured": false, 00:34:48.608 "data_offset": 0, 00:34:48.608 "data_size": 0 00:34:48.608 } 00:34:48.608 ] 00:34:48.608 }' 00:34:48.608 11:58:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:48.608 11:58:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:49.541 11:58:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:49.799 [2024-06-10 11:58:21.687590] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:49.799 BaseBdev2 00:34:49.799 11:58:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:49.799 11:58:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:34:49.799 11:58:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:49.799 11:58:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:34:49.799 11:58:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:49.799 11:58:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:49.799 11:58:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:50.058 11:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:50.314 [ 00:34:50.314 { 00:34:50.314 "name": "BaseBdev2", 00:34:50.314 "aliases": [ 00:34:50.314 "36b9ac4a-6696-44c3-8e04-3d5f24447602" 00:34:50.314 ], 00:34:50.314 "product_name": "Malloc disk", 00:34:50.314 "block_size": 512, 00:34:50.314 "num_blocks": 65536, 00:34:50.314 "uuid": "36b9ac4a-6696-44c3-8e04-3d5f24447602", 00:34:50.314 "assigned_rate_limits": { 00:34:50.314 "rw_ios_per_sec": 0, 00:34:50.314 "rw_mbytes_per_sec": 0, 00:34:50.314 "r_mbytes_per_sec": 0, 00:34:50.314 "w_mbytes_per_sec": 0 00:34:50.314 }, 00:34:50.314 "claimed": true, 00:34:50.314 "claim_type": "exclusive_write", 00:34:50.314 "zoned": false, 00:34:50.314 "supported_io_types": { 00:34:50.314 "read": true, 00:34:50.314 "write": true, 00:34:50.314 "unmap": true, 00:34:50.314 "write_zeroes": true, 00:34:50.314 "flush": true, 00:34:50.314 "reset": true, 00:34:50.314 "compare": false, 00:34:50.314 "compare_and_write": false, 00:34:50.314 "abort": true, 00:34:50.314 "nvme_admin": false, 00:34:50.314 "nvme_io": false 00:34:50.314 }, 00:34:50.314 "memory_domains": [ 00:34:50.314 { 00:34:50.314 "dma_device_id": "system", 00:34:50.314 "dma_device_type": 1 00:34:50.314 }, 00:34:50.314 { 00:34:50.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.314 "dma_device_type": 2 00:34:50.314 } 00:34:50.314 ], 00:34:50.314 "driver_specific": {} 00:34:50.314 } 
00:34:50.314 ] 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:50.314 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:50.570 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:50.570 "name": "Existed_Raid", 00:34:50.570 "uuid": "95d6657d-a9c3-43aa-b112-7254caf78d98", 00:34:50.570 "strip_size_kb": 64, 00:34:50.570 "state": "configuring", 00:34:50.570 "raid_level": "raid5f", 00:34:50.571 "superblock": true, 00:34:50.571 "num_base_bdevs": 3, 00:34:50.571 "num_base_bdevs_discovered": 2, 00:34:50.571 "num_base_bdevs_operational": 3, 00:34:50.571 "base_bdevs_list": [ 00:34:50.571 { 00:34:50.571 "name": "BaseBdev1", 00:34:50.571 "uuid": "ede484ce-bc1b-40a5-8e67-339974b571fd", 00:34:50.571 "is_configured": true, 00:34:50.571 "data_offset": 2048, 00:34:50.571 "data_size": 63488 00:34:50.571 }, 00:34:50.571 { 00:34:50.571 "name": "BaseBdev2", 00:34:50.571 "uuid": "36b9ac4a-6696-44c3-8e04-3d5f24447602", 00:34:50.571 "is_configured": true, 00:34:50.571 "data_offset": 2048, 00:34:50.571 "data_size": 63488 00:34:50.571 }, 00:34:50.571 { 00:34:50.571 "name": "BaseBdev3", 00:34:50.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.571 "is_configured": false, 00:34:50.571 "data_offset": 0, 00:34:50.571 "data_size": 0 00:34:50.571 } 00:34:50.571 ] 00:34:50.571 }' 00:34:50.571 11:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:50.571 11:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.179 11:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:51.477 [2024-06-10 11:58:23.485745] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:51.477 [2024-06-10 11:58:23.486370] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:34:51.477 [2024-06-10 11:58:23.486511] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:51.477 [2024-06-10 11:58:23.486799] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:34:51.477 BaseBdev3 00:34:51.477 [2024-06-10 11:58:23.493235] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:34:51.477 [2024-06-10 11:58:23.493451] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:34:51.477 [2024-06-10 11:58:23.493768] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:51.477 11:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:34:51.477 11:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:34:51.477 11:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:51.477 11:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:34:51.477 11:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:51.477 11:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:51.477 11:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:52.044 11:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:52.303 [ 00:34:52.303 { 00:34:52.303 "name": "BaseBdev3", 00:34:52.303 "aliases": [ 00:34:52.303 "57bc7493-b477-4f91-b4ca-76de1e26d656" 00:34:52.303 ], 00:34:52.303 "product_name": "Malloc disk", 00:34:52.303 "block_size": 512, 00:34:52.303 "num_blocks": 65536, 00:34:52.303 "uuid": "57bc7493-b477-4f91-b4ca-76de1e26d656", 00:34:52.303 "assigned_rate_limits": { 00:34:52.303 "rw_ios_per_sec": 0, 00:34:52.303 "rw_mbytes_per_sec": 0, 00:34:52.303 "r_mbytes_per_sec": 0, 00:34:52.303 "w_mbytes_per_sec": 0 00:34:52.303 }, 00:34:52.303 "claimed": true, 00:34:52.303 "claim_type": "exclusive_write", 00:34:52.303 "zoned": false, 00:34:52.303 "supported_io_types": { 00:34:52.303 "read": true, 00:34:52.303 "write": true, 00:34:52.303 "unmap": true, 00:34:52.303 "write_zeroes": true, 00:34:52.303 "flush": true, 00:34:52.303 "reset": true, 00:34:52.303 "compare": false, 00:34:52.303 "compare_and_write": false, 00:34:52.303 "abort": true, 00:34:52.303 "nvme_admin": false, 00:34:52.303 "nvme_io": false 00:34:52.303 }, 00:34:52.303 "memory_domains": [ 00:34:52.303 { 00:34:52.303 "dma_device_id": "system", 00:34:52.303 "dma_device_type": 1 00:34:52.303 }, 00:34:52.303 { 00:34:52.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.303 "dma_device_type": 2 00:34:52.303 } 00:34:52.303 ], 00:34:52.303 "driver_specific": {} 00:34:52.303 } 00:34:52.303 ] 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
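For reference, the RPC sequence traced above can be reproduced by hand with roughly the following sketch. The socket path, bdev names, sizes and flags are taken verbatim from this trace; the final jq line is only a condensed stand-in for the script's verify_raid_bdev_state check, not its exact code.

    # minimal manual re-run of the traced flow, assuming the same target is still
    # listening on /var/tmp/spdk-raid.sock (values copied from the trace above)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # register the raid5f bdev first (-s = with superblock); its base bdevs may not exist yet
    $rpc bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"   # 32 MiB malloc disk, 512-byte blocks; raid5f claims it on examine
        $rpc bdev_wait_for_examine
    done
    # state flips from "configuring" to "online" once all three base bdevs are configured
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'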
00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.303 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:52.561 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:52.561 "name": "Existed_Raid", 00:34:52.561 "uuid": "95d6657d-a9c3-43aa-b112-7254caf78d98", 00:34:52.561 "strip_size_kb": 64, 00:34:52.561 "state": "online", 00:34:52.561 "raid_level": "raid5f", 00:34:52.561 "superblock": true, 00:34:52.561 "num_base_bdevs": 3, 00:34:52.561 "num_base_bdevs_discovered": 3, 00:34:52.561 "num_base_bdevs_operational": 3, 00:34:52.561 "base_bdevs_list": [ 00:34:52.561 { 00:34:52.561 "name": "BaseBdev1", 00:34:52.561 "uuid": "ede484ce-bc1b-40a5-8e67-339974b571fd", 00:34:52.561 "is_configured": true, 00:34:52.561 "data_offset": 2048, 00:34:52.561 "data_size": 63488 00:34:52.561 }, 00:34:52.561 { 00:34:52.561 "name": "BaseBdev2", 00:34:52.561 "uuid": "36b9ac4a-6696-44c3-8e04-3d5f24447602", 00:34:52.561 "is_configured": true, 00:34:52.561 "data_offset": 2048, 00:34:52.561 "data_size": 63488 00:34:52.561 }, 00:34:52.561 { 00:34:52.561 "name": "BaseBdev3", 00:34:52.561 "uuid": "57bc7493-b477-4f91-b4ca-76de1e26d656", 00:34:52.561 "is_configured": true, 00:34:52.561 "data_offset": 2048, 00:34:52.561 "data_size": 63488 00:34:52.561 } 00:34:52.561 ] 00:34:52.561 }' 00:34:52.561 11:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:52.561 11:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:53.128 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:53.128 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:53.128 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:53.128 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:34:53.128 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:53.128 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:34:53.128 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:53.128 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:53.387 [2024-06-10 11:58:25.346135] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:53.387 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:53.387 "name": "Existed_Raid", 00:34:53.387 "aliases": [ 00:34:53.387 "95d6657d-a9c3-43aa-b112-7254caf78d98" 00:34:53.387 ], 00:34:53.387 "product_name": "Raid Volume", 00:34:53.387 "block_size": 512, 00:34:53.387 "num_blocks": 126976, 00:34:53.387 "uuid": "95d6657d-a9c3-43aa-b112-7254caf78d98", 00:34:53.387 "assigned_rate_limits": { 00:34:53.387 "rw_ios_per_sec": 0, 00:34:53.387 "rw_mbytes_per_sec": 0, 00:34:53.387 "r_mbytes_per_sec": 0, 00:34:53.387 "w_mbytes_per_sec": 0 00:34:53.387 }, 00:34:53.387 "claimed": false, 00:34:53.387 "zoned": false, 00:34:53.387 "supported_io_types": { 00:34:53.387 "read": true, 00:34:53.387 "write": true, 00:34:53.387 "unmap": false, 00:34:53.387 "write_zeroes": true, 00:34:53.387 "flush": false, 00:34:53.387 "reset": true, 00:34:53.387 "compare": false, 00:34:53.387 "compare_and_write": false, 00:34:53.387 "abort": false, 00:34:53.387 "nvme_admin": false, 00:34:53.387 "nvme_io": false 00:34:53.387 }, 00:34:53.387 "driver_specific": { 00:34:53.387 "raid": { 00:34:53.387 "uuid": "95d6657d-a9c3-43aa-b112-7254caf78d98", 00:34:53.387 "strip_size_kb": 64, 00:34:53.387 "state": "online", 00:34:53.387 "raid_level": "raid5f", 00:34:53.387 "superblock": true, 00:34:53.387 "num_base_bdevs": 3, 00:34:53.387 "num_base_bdevs_discovered": 3, 00:34:53.387 "num_base_bdevs_operational": 3, 00:34:53.387 "base_bdevs_list": [ 00:34:53.387 { 00:34:53.387 "name": "BaseBdev1", 00:34:53.387 "uuid": "ede484ce-bc1b-40a5-8e67-339974b571fd", 00:34:53.387 "is_configured": true, 00:34:53.387 "data_offset": 2048, 00:34:53.387 "data_size": 63488 00:34:53.387 }, 00:34:53.387 { 00:34:53.387 "name": "BaseBdev2", 00:34:53.387 "uuid": "36b9ac4a-6696-44c3-8e04-3d5f24447602", 00:34:53.387 "is_configured": true, 00:34:53.387 "data_offset": 2048, 00:34:53.387 "data_size": 63488 00:34:53.387 }, 00:34:53.387 { 00:34:53.387 "name": "BaseBdev3", 00:34:53.387 "uuid": "57bc7493-b477-4f91-b4ca-76de1e26d656", 00:34:53.387 "is_configured": true, 00:34:53.387 "data_offset": 2048, 00:34:53.387 "data_size": 63488 00:34:53.387 } 00:34:53.387 ] 00:34:53.387 } 00:34:53.387 } 00:34:53.387 }' 00:34:53.387 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:53.387 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:53.387 BaseBdev2 00:34:53.387 BaseBdev3' 00:34:53.387 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:53.387 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:53.387 11:58:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:53.646 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:53.646 "name": "BaseBdev1", 00:34:53.646 "aliases": [ 00:34:53.646 "ede484ce-bc1b-40a5-8e67-339974b571fd" 00:34:53.646 ], 00:34:53.646 "product_name": "Malloc disk", 00:34:53.646 "block_size": 512, 00:34:53.646 "num_blocks": 65536, 00:34:53.646 "uuid": "ede484ce-bc1b-40a5-8e67-339974b571fd", 00:34:53.646 "assigned_rate_limits": { 00:34:53.646 "rw_ios_per_sec": 0, 00:34:53.646 "rw_mbytes_per_sec": 0, 00:34:53.646 "r_mbytes_per_sec": 0, 00:34:53.646 "w_mbytes_per_sec": 0 00:34:53.646 }, 00:34:53.646 "claimed": true, 00:34:53.646 "claim_type": "exclusive_write", 00:34:53.646 "zoned": false, 00:34:53.646 "supported_io_types": { 00:34:53.646 "read": true, 00:34:53.646 "write": true, 00:34:53.646 "unmap": true, 00:34:53.646 "write_zeroes": true, 00:34:53.646 "flush": true, 00:34:53.646 "reset": true, 00:34:53.646 "compare": false, 00:34:53.646 "compare_and_write": false, 00:34:53.646 "abort": true, 00:34:53.646 "nvme_admin": false, 00:34:53.646 "nvme_io": false 00:34:53.646 }, 00:34:53.646 "memory_domains": [ 00:34:53.646 { 00:34:53.646 "dma_device_id": "system", 00:34:53.646 "dma_device_type": 1 00:34:53.646 }, 00:34:53.646 { 00:34:53.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:53.646 "dma_device_type": 2 00:34:53.646 } 00:34:53.646 ], 00:34:53.646 "driver_specific": {} 00:34:53.646 }' 00:34:53.646 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:53.646 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:53.646 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:53.906 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:54.164 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:54.164 11:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:54.164 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:54.164 "name": "BaseBdev2", 00:34:54.164 "aliases": [ 00:34:54.164 "36b9ac4a-6696-44c3-8e04-3d5f24447602" 00:34:54.164 ], 00:34:54.164 "product_name": "Malloc disk", 00:34:54.164 "block_size": 512, 00:34:54.164 "num_blocks": 
65536, 00:34:54.164 "uuid": "36b9ac4a-6696-44c3-8e04-3d5f24447602", 00:34:54.164 "assigned_rate_limits": { 00:34:54.164 "rw_ios_per_sec": 0, 00:34:54.164 "rw_mbytes_per_sec": 0, 00:34:54.164 "r_mbytes_per_sec": 0, 00:34:54.164 "w_mbytes_per_sec": 0 00:34:54.164 }, 00:34:54.164 "claimed": true, 00:34:54.164 "claim_type": "exclusive_write", 00:34:54.164 "zoned": false, 00:34:54.164 "supported_io_types": { 00:34:54.164 "read": true, 00:34:54.164 "write": true, 00:34:54.164 "unmap": true, 00:34:54.164 "write_zeroes": true, 00:34:54.164 "flush": true, 00:34:54.164 "reset": true, 00:34:54.164 "compare": false, 00:34:54.164 "compare_and_write": false, 00:34:54.164 "abort": true, 00:34:54.164 "nvme_admin": false, 00:34:54.164 "nvme_io": false 00:34:54.164 }, 00:34:54.164 "memory_domains": [ 00:34:54.164 { 00:34:54.164 "dma_device_id": "system", 00:34:54.164 "dma_device_type": 1 00:34:54.164 }, 00:34:54.164 { 00:34:54.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:54.164 "dma_device_type": 2 00:34:54.164 } 00:34:54.164 ], 00:34:54.164 "driver_specific": {} 00:34:54.164 }' 00:34:54.164 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:54.423 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:54.681 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:54.681 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:54.681 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:54.681 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:54.681 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:54.940 "name": "BaseBdev3", 00:34:54.940 "aliases": [ 00:34:54.940 "57bc7493-b477-4f91-b4ca-76de1e26d656" 00:34:54.940 ], 00:34:54.940 "product_name": "Malloc disk", 00:34:54.940 "block_size": 512, 00:34:54.940 "num_blocks": 65536, 00:34:54.940 "uuid": "57bc7493-b477-4f91-b4ca-76de1e26d656", 00:34:54.940 "assigned_rate_limits": { 00:34:54.940 "rw_ios_per_sec": 0, 00:34:54.940 "rw_mbytes_per_sec": 0, 00:34:54.940 "r_mbytes_per_sec": 0, 00:34:54.940 "w_mbytes_per_sec": 0 00:34:54.940 }, 00:34:54.940 "claimed": true, 00:34:54.940 "claim_type": "exclusive_write", 00:34:54.940 "zoned": false, 00:34:54.940 "supported_io_types": { 00:34:54.940 
"read": true, 00:34:54.940 "write": true, 00:34:54.940 "unmap": true, 00:34:54.940 "write_zeroes": true, 00:34:54.940 "flush": true, 00:34:54.940 "reset": true, 00:34:54.940 "compare": false, 00:34:54.940 "compare_and_write": false, 00:34:54.940 "abort": true, 00:34:54.940 "nvme_admin": false, 00:34:54.940 "nvme_io": false 00:34:54.940 }, 00:34:54.940 "memory_domains": [ 00:34:54.940 { 00:34:54.940 "dma_device_id": "system", 00:34:54.940 "dma_device_type": 1 00:34:54.940 }, 00:34:54.940 { 00:34:54.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:54.940 "dma_device_type": 2 00:34:54.940 } 00:34:54.940 ], 00:34:54.940 "driver_specific": {} 00:34:54.940 }' 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:54.940 11:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:55.199 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:55.199 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:55.199 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:55.199 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:55.199 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:55.457 [2024-06-10 11:58:27.386552] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.716 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:55.974 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:55.974 "name": "Existed_Raid", 00:34:55.974 "uuid": "95d6657d-a9c3-43aa-b112-7254caf78d98", 00:34:55.974 "strip_size_kb": 64, 00:34:55.974 "state": "online", 00:34:55.974 "raid_level": "raid5f", 00:34:55.974 "superblock": true, 00:34:55.974 "num_base_bdevs": 3, 00:34:55.974 "num_base_bdevs_discovered": 2, 00:34:55.974 "num_base_bdevs_operational": 2, 00:34:55.974 "base_bdevs_list": [ 00:34:55.974 { 00:34:55.974 "name": null, 00:34:55.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.974 "is_configured": false, 00:34:55.974 "data_offset": 2048, 00:34:55.974 "data_size": 63488 00:34:55.974 }, 00:34:55.974 { 00:34:55.974 "name": "BaseBdev2", 00:34:55.974 "uuid": "36b9ac4a-6696-44c3-8e04-3d5f24447602", 00:34:55.974 "is_configured": true, 00:34:55.974 "data_offset": 2048, 00:34:55.975 "data_size": 63488 00:34:55.975 }, 00:34:55.975 { 00:34:55.975 "name": "BaseBdev3", 00:34:55.975 "uuid": "57bc7493-b477-4f91-b4ca-76de1e26d656", 00:34:55.975 "is_configured": true, 00:34:55.975 "data_offset": 2048, 00:34:55.975 "data_size": 63488 00:34:55.975 } 00:34:55.975 ] 00:34:55.975 }' 00:34:55.975 11:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:55.975 11:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:56.542 11:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:34:56.542 11:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:56.542 11:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.542 11:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:56.843 11:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:56.843 11:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:56.843 11:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:56.843 [2024-06-10 11:58:28.870376] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:56.843 [2024-06-10 11:58:28.870792] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:57.102 [2024-06-10 11:58:28.985725] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:57.102 11:58:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:57.102 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:57.102 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.102 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:57.360 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:57.361 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:57.361 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:57.619 [2024-06-10 11:58:29.474005] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:57.619 [2024-06-10 11:58:29.474503] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:34:57.619 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:57.619 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:57.619 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.619 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:34:58.187 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:34:58.187 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:34:58.187 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:34:58.187 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:34:58.187 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:58.187 11:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:58.187 BaseBdev2 00:34:58.187 11:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:34:58.187 11:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:34:58.187 11:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:58.187 11:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:34:58.187 11:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:58.187 11:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:58.187 11:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:58.753 11:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:59.012 [ 00:34:59.012 { 00:34:59.012 "name": "BaseBdev2", 00:34:59.012 "aliases": [ 00:34:59.012 "fc3dcda7-ef61-47f6-b089-cca998e7de77" 00:34:59.012 ], 00:34:59.012 "product_name": "Malloc disk", 00:34:59.012 "block_size": 512, 00:34:59.012 "num_blocks": 65536, 00:34:59.012 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:34:59.012 "assigned_rate_limits": { 00:34:59.012 "rw_ios_per_sec": 0, 00:34:59.012 "rw_mbytes_per_sec": 0, 00:34:59.012 "r_mbytes_per_sec": 0, 00:34:59.012 "w_mbytes_per_sec": 0 00:34:59.012 }, 00:34:59.012 "claimed": false, 00:34:59.012 "zoned": false, 00:34:59.012 "supported_io_types": { 00:34:59.012 "read": true, 00:34:59.012 "write": true, 00:34:59.012 "unmap": true, 00:34:59.012 "write_zeroes": true, 00:34:59.012 "flush": true, 00:34:59.012 "reset": true, 00:34:59.012 "compare": false, 00:34:59.012 "compare_and_write": false, 00:34:59.012 "abort": true, 00:34:59.012 "nvme_admin": false, 00:34:59.012 "nvme_io": false 00:34:59.012 }, 00:34:59.012 "memory_domains": [ 00:34:59.012 { 00:34:59.012 "dma_device_id": "system", 00:34:59.012 "dma_device_type": 1 00:34:59.012 }, 00:34:59.012 { 00:34:59.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.012 "dma_device_type": 2 00:34:59.012 } 00:34:59.012 ], 00:34:59.012 "driver_specific": {} 00:34:59.012 } 00:34:59.012 ] 00:34:59.012 11:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:34:59.012 11:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:59.012 11:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:59.012 11:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:59.271 BaseBdev3 00:34:59.271 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:34:59.271 11:58:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:34:59.271 11:58:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:34:59.271 11:58:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:34:59.271 11:58:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:34:59.271 11:58:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:34:59.271 11:58:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:59.529 11:58:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:59.789 [ 00:34:59.789 { 00:34:59.789 "name": "BaseBdev3", 00:34:59.789 "aliases": [ 00:34:59.789 "6296693e-d4ec-44e7-a50f-9fff98fc444d" 00:34:59.789 ], 00:34:59.789 "product_name": "Malloc disk", 00:34:59.789 "block_size": 512, 00:34:59.789 "num_blocks": 65536, 00:34:59.789 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:34:59.789 "assigned_rate_limits": { 00:34:59.789 "rw_ios_per_sec": 0, 00:34:59.789 "rw_mbytes_per_sec": 0, 00:34:59.789 "r_mbytes_per_sec": 0, 00:34:59.789 "w_mbytes_per_sec": 0 00:34:59.789 }, 00:34:59.789 "claimed": 
false, 00:34:59.789 "zoned": false, 00:34:59.789 "supported_io_types": { 00:34:59.789 "read": true, 00:34:59.789 "write": true, 00:34:59.789 "unmap": true, 00:34:59.789 "write_zeroes": true, 00:34:59.789 "flush": true, 00:34:59.789 "reset": true, 00:34:59.789 "compare": false, 00:34:59.789 "compare_and_write": false, 00:34:59.789 "abort": true, 00:34:59.789 "nvme_admin": false, 00:34:59.789 "nvme_io": false 00:34:59.789 }, 00:34:59.789 "memory_domains": [ 00:34:59.789 { 00:34:59.789 "dma_device_id": "system", 00:34:59.789 "dma_device_type": 1 00:34:59.789 }, 00:34:59.789 { 00:34:59.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.789 "dma_device_type": 2 00:34:59.789 } 00:34:59.789 ], 00:34:59.789 "driver_specific": {} 00:34:59.789 } 00:34:59.789 ] 00:34:59.789 11:58:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:34:59.789 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:59.789 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:59.789 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:59.789 [2024-06-10 11:58:31.839263] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:59.789 [2024-06-10 11:58:31.839701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:59.789 [2024-06-10 11:58:31.839937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:59.789 [2024-06-10 11:58:31.842288] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.048 11:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.048 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:00.048 "name": "Existed_Raid", 00:35:00.048 
"uuid": "bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:00.048 "strip_size_kb": 64, 00:35:00.048 "state": "configuring", 00:35:00.048 "raid_level": "raid5f", 00:35:00.048 "superblock": true, 00:35:00.048 "num_base_bdevs": 3, 00:35:00.048 "num_base_bdevs_discovered": 2, 00:35:00.048 "num_base_bdevs_operational": 3, 00:35:00.048 "base_bdevs_list": [ 00:35:00.048 { 00:35:00.048 "name": "BaseBdev1", 00:35:00.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:00.048 "is_configured": false, 00:35:00.048 "data_offset": 0, 00:35:00.048 "data_size": 0 00:35:00.048 }, 00:35:00.048 { 00:35:00.048 "name": "BaseBdev2", 00:35:00.048 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:00.048 "is_configured": true, 00:35:00.048 "data_offset": 2048, 00:35:00.048 "data_size": 63488 00:35:00.048 }, 00:35:00.048 { 00:35:00.048 "name": "BaseBdev3", 00:35:00.048 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:00.048 "is_configured": true, 00:35:00.048 "data_offset": 2048, 00:35:00.048 "data_size": 63488 00:35:00.048 } 00:35:00.048 ] 00:35:00.048 }' 00:35:00.048 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:00.048 11:58:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:00.614 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:35:00.873 [2024-06-10 11:58:32.903537] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.873 11:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.131 11:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:01.131 "name": "Existed_Raid", 00:35:01.131 "uuid": "bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:01.131 "strip_size_kb": 64, 00:35:01.131 "state": "configuring", 00:35:01.131 "raid_level": "raid5f", 00:35:01.131 "superblock": true, 00:35:01.131 "num_base_bdevs": 3, 00:35:01.131 "num_base_bdevs_discovered": 1, 00:35:01.131 
"num_base_bdevs_operational": 3, 00:35:01.131 "base_bdevs_list": [ 00:35:01.131 { 00:35:01.131 "name": "BaseBdev1", 00:35:01.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.131 "is_configured": false, 00:35:01.131 "data_offset": 0, 00:35:01.131 "data_size": 0 00:35:01.131 }, 00:35:01.131 { 00:35:01.131 "name": null, 00:35:01.131 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:01.131 "is_configured": false, 00:35:01.131 "data_offset": 2048, 00:35:01.131 "data_size": 63488 00:35:01.131 }, 00:35:01.131 { 00:35:01.131 "name": "BaseBdev3", 00:35:01.131 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:01.131 "is_configured": true, 00:35:01.131 "data_offset": 2048, 00:35:01.131 "data_size": 63488 00:35:01.131 } 00:35:01.131 ] 00:35:01.131 }' 00:35:01.131 11:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:01.131 11:58:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:02.068 11:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.068 11:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:02.328 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:35:02.328 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:02.587 [2024-06-10 11:58:34.450103] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:02.587 BaseBdev1 00:35:02.587 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:35:02.587 11:58:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:35:02.587 11:58:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:35:02.587 11:58:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:35:02.587 11:58:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:35:02.587 11:58:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:35:02.587 11:58:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:02.846 11:58:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:03.105 [ 00:35:03.105 { 00:35:03.105 "name": "BaseBdev1", 00:35:03.105 "aliases": [ 00:35:03.105 "90b27ff0-08a6-432b-b11a-25e5af931754" 00:35:03.105 ], 00:35:03.105 "product_name": "Malloc disk", 00:35:03.105 "block_size": 512, 00:35:03.105 "num_blocks": 65536, 00:35:03.105 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:03.105 "assigned_rate_limits": { 00:35:03.105 "rw_ios_per_sec": 0, 00:35:03.105 "rw_mbytes_per_sec": 0, 00:35:03.105 "r_mbytes_per_sec": 0, 00:35:03.105 "w_mbytes_per_sec": 0 00:35:03.105 }, 00:35:03.105 "claimed": true, 00:35:03.105 "claim_type": "exclusive_write", 00:35:03.105 "zoned": false, 00:35:03.105 "supported_io_types": { 00:35:03.105 
"read": true, 00:35:03.105 "write": true, 00:35:03.105 "unmap": true, 00:35:03.105 "write_zeroes": true, 00:35:03.105 "flush": true, 00:35:03.105 "reset": true, 00:35:03.105 "compare": false, 00:35:03.105 "compare_and_write": false, 00:35:03.105 "abort": true, 00:35:03.105 "nvme_admin": false, 00:35:03.105 "nvme_io": false 00:35:03.105 }, 00:35:03.105 "memory_domains": [ 00:35:03.105 { 00:35:03.105 "dma_device_id": "system", 00:35:03.105 "dma_device_type": 1 00:35:03.105 }, 00:35:03.105 { 00:35:03.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:03.105 "dma_device_type": 2 00:35:03.106 } 00:35:03.106 ], 00:35:03.106 "driver_specific": {} 00:35:03.106 } 00:35:03.106 ] 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.106 11:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:03.365 11:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:03.365 "name": "Existed_Raid", 00:35:03.365 "uuid": "bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:03.365 "strip_size_kb": 64, 00:35:03.365 "state": "configuring", 00:35:03.365 "raid_level": "raid5f", 00:35:03.365 "superblock": true, 00:35:03.365 "num_base_bdevs": 3, 00:35:03.365 "num_base_bdevs_discovered": 2, 00:35:03.365 "num_base_bdevs_operational": 3, 00:35:03.365 "base_bdevs_list": [ 00:35:03.365 { 00:35:03.365 "name": "BaseBdev1", 00:35:03.365 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:03.365 "is_configured": true, 00:35:03.365 "data_offset": 2048, 00:35:03.365 "data_size": 63488 00:35:03.365 }, 00:35:03.365 { 00:35:03.365 "name": null, 00:35:03.365 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:03.365 "is_configured": false, 00:35:03.365 "data_offset": 2048, 00:35:03.365 "data_size": 63488 00:35:03.365 }, 00:35:03.365 { 00:35:03.365 "name": "BaseBdev3", 00:35:03.365 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:03.365 "is_configured": true, 00:35:03.365 "data_offset": 2048, 00:35:03.365 "data_size": 63488 00:35:03.365 } 00:35:03.365 ] 00:35:03.365 }' 00:35:03.365 
11:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:03.365 11:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:03.931 11:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.931 11:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:04.190 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:35:04.190 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:35:04.449 [2024-06-10 11:58:36.319157] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:04.449 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.708 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:04.709 "name": "Existed_Raid", 00:35:04.709 "uuid": "bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:04.709 "strip_size_kb": 64, 00:35:04.709 "state": "configuring", 00:35:04.709 "raid_level": "raid5f", 00:35:04.709 "superblock": true, 00:35:04.709 "num_base_bdevs": 3, 00:35:04.709 "num_base_bdevs_discovered": 1, 00:35:04.709 "num_base_bdevs_operational": 3, 00:35:04.709 "base_bdevs_list": [ 00:35:04.709 { 00:35:04.709 "name": "BaseBdev1", 00:35:04.709 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:04.709 "is_configured": true, 00:35:04.709 "data_offset": 2048, 00:35:04.709 "data_size": 63488 00:35:04.709 }, 00:35:04.709 { 00:35:04.709 "name": null, 00:35:04.709 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:04.709 "is_configured": false, 00:35:04.709 "data_offset": 2048, 00:35:04.709 "data_size": 63488 00:35:04.709 }, 00:35:04.709 { 00:35:04.709 "name": null, 00:35:04.709 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:04.709 
"is_configured": false, 00:35:04.709 "data_offset": 2048, 00:35:04.709 "data_size": 63488 00:35:04.709 } 00:35:04.709 ] 00:35:04.709 }' 00:35:04.709 11:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:04.709 11:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.276 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.276 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:05.534 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:35:05.534 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:05.792 [2024-06-10 11:58:37.631213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:05.792 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.051 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:06.051 "name": "Existed_Raid", 00:35:06.051 "uuid": "bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:06.051 "strip_size_kb": 64, 00:35:06.051 "state": "configuring", 00:35:06.051 "raid_level": "raid5f", 00:35:06.051 "superblock": true, 00:35:06.051 "num_base_bdevs": 3, 00:35:06.051 "num_base_bdevs_discovered": 2, 00:35:06.051 "num_base_bdevs_operational": 3, 00:35:06.051 "base_bdevs_list": [ 00:35:06.051 { 00:35:06.051 "name": "BaseBdev1", 00:35:06.051 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:06.051 "is_configured": true, 00:35:06.051 "data_offset": 2048, 00:35:06.051 "data_size": 63488 00:35:06.051 }, 00:35:06.051 { 00:35:06.051 "name": null, 00:35:06.051 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:06.051 "is_configured": false, 00:35:06.051 
"data_offset": 2048, 00:35:06.051 "data_size": 63488 00:35:06.051 }, 00:35:06.051 { 00:35:06.051 "name": "BaseBdev3", 00:35:06.051 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:06.051 "is_configured": true, 00:35:06.051 "data_offset": 2048, 00:35:06.051 "data_size": 63488 00:35:06.051 } 00:35:06.051 ] 00:35:06.051 }' 00:35:06.051 11:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:06.051 11:58:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.619 11:58:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.619 11:58:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:06.877 11:58:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:35:06.877 11:58:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:07.136 [2024-06-10 11:58:39.051559] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.136 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.394 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:07.394 "name": "Existed_Raid", 00:35:07.394 "uuid": "bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:07.394 "strip_size_kb": 64, 00:35:07.394 "state": "configuring", 00:35:07.394 "raid_level": "raid5f", 00:35:07.394 "superblock": true, 00:35:07.394 "num_base_bdevs": 3, 00:35:07.394 "num_base_bdevs_discovered": 1, 00:35:07.394 "num_base_bdevs_operational": 3, 00:35:07.394 "base_bdevs_list": [ 00:35:07.394 { 00:35:07.394 "name": null, 00:35:07.394 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:07.394 "is_configured": false, 00:35:07.394 "data_offset": 2048, 00:35:07.394 "data_size": 63488 00:35:07.394 }, 00:35:07.394 
{ 00:35:07.394 "name": null, 00:35:07.394 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:07.394 "is_configured": false, 00:35:07.394 "data_offset": 2048, 00:35:07.394 "data_size": 63488 00:35:07.395 }, 00:35:07.395 { 00:35:07.395 "name": "BaseBdev3", 00:35:07.395 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:07.395 "is_configured": true, 00:35:07.395 "data_offset": 2048, 00:35:07.395 "data_size": 63488 00:35:07.395 } 00:35:07.395 ] 00:35:07.395 }' 00:35:07.653 11:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:07.653 11:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.221 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.221 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:08.478 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:35:08.478 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:08.736 [2024-06-10 11:58:40.561427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:08.736 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.995 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:08.995 "name": "Existed_Raid", 00:35:08.995 "uuid": "bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:08.995 "strip_size_kb": 64, 00:35:08.995 "state": "configuring", 00:35:08.995 "raid_level": "raid5f", 00:35:08.995 "superblock": true, 00:35:08.995 "num_base_bdevs": 3, 00:35:08.995 "num_base_bdevs_discovered": 2, 00:35:08.995 "num_base_bdevs_operational": 3, 00:35:08.995 "base_bdevs_list": [ 00:35:08.995 { 00:35:08.995 "name": null, 00:35:08.995 
"uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:08.995 "is_configured": false, 00:35:08.995 "data_offset": 2048, 00:35:08.995 "data_size": 63488 00:35:08.995 }, 00:35:08.995 { 00:35:08.995 "name": "BaseBdev2", 00:35:08.995 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:08.995 "is_configured": true, 00:35:08.995 "data_offset": 2048, 00:35:08.995 "data_size": 63488 00:35:08.995 }, 00:35:08.995 { 00:35:08.995 "name": "BaseBdev3", 00:35:08.995 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:08.995 "is_configured": true, 00:35:08.995 "data_offset": 2048, 00:35:08.995 "data_size": 63488 00:35:08.995 } 00:35:08.995 ] 00:35:08.995 }' 00:35:08.995 11:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:08.995 11:58:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.562 11:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.562 11:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:09.820 11:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:35:09.820 11:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:09.820 11:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.078 11:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 90b27ff0-08a6-432b-b11a-25e5af931754 00:35:10.078 [2024-06-10 11:58:42.117172] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:10.078 [2024-06-10 11:58:42.117629] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:35:10.078 [2024-06-10 11:58:42.117760] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:10.078 [2024-06-10 11:58:42.117908] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:10.078 NewBaseBdev 00:35:10.078 [2024-06-10 11:58:42.124228] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:35:10.078 [2024-06-10 11:58:42.124435] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:35:10.078 [2024-06-10 11:58:42.124767] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:10.078 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:35:10.078 11:58:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:35:10.078 11:58:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:35:10.078 11:58:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:35:10.078 11:58:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:35:10.078 11:58:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:35:10.078 11:58:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:10.336 11:58:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:10.593 [ 00:35:10.593 { 00:35:10.593 "name": "NewBaseBdev", 00:35:10.593 "aliases": [ 00:35:10.593 "90b27ff0-08a6-432b-b11a-25e5af931754" 00:35:10.593 ], 00:35:10.593 "product_name": "Malloc disk", 00:35:10.593 "block_size": 512, 00:35:10.593 "num_blocks": 65536, 00:35:10.593 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:10.593 "assigned_rate_limits": { 00:35:10.593 "rw_ios_per_sec": 0, 00:35:10.593 "rw_mbytes_per_sec": 0, 00:35:10.594 "r_mbytes_per_sec": 0, 00:35:10.594 "w_mbytes_per_sec": 0 00:35:10.594 }, 00:35:10.594 "claimed": true, 00:35:10.594 "claim_type": "exclusive_write", 00:35:10.594 "zoned": false, 00:35:10.594 "supported_io_types": { 00:35:10.594 "read": true, 00:35:10.594 "write": true, 00:35:10.594 "unmap": true, 00:35:10.594 "write_zeroes": true, 00:35:10.594 "flush": true, 00:35:10.594 "reset": true, 00:35:10.594 "compare": false, 00:35:10.594 "compare_and_write": false, 00:35:10.594 "abort": true, 00:35:10.594 "nvme_admin": false, 00:35:10.594 "nvme_io": false 00:35:10.594 }, 00:35:10.594 "memory_domains": [ 00:35:10.594 { 00:35:10.594 "dma_device_id": "system", 00:35:10.594 "dma_device_type": 1 00:35:10.594 }, 00:35:10.594 { 00:35:10.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.594 "dma_device_type": 2 00:35:10.594 } 00:35:10.594 ], 00:35:10.594 "driver_specific": {} 00:35:10.594 } 00:35:10.594 ] 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.594 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:10.852 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:10.852 "name": "Existed_Raid", 00:35:10.852 "uuid": 
"bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:10.852 "strip_size_kb": 64, 00:35:10.852 "state": "online", 00:35:10.852 "raid_level": "raid5f", 00:35:10.852 "superblock": true, 00:35:10.852 "num_base_bdevs": 3, 00:35:10.852 "num_base_bdevs_discovered": 3, 00:35:10.852 "num_base_bdevs_operational": 3, 00:35:10.852 "base_bdevs_list": [ 00:35:10.852 { 00:35:10.852 "name": "NewBaseBdev", 00:35:10.852 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:10.852 "is_configured": true, 00:35:10.852 "data_offset": 2048, 00:35:10.852 "data_size": 63488 00:35:10.852 }, 00:35:10.852 { 00:35:10.852 "name": "BaseBdev2", 00:35:10.852 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:10.852 "is_configured": true, 00:35:10.852 "data_offset": 2048, 00:35:10.852 "data_size": 63488 00:35:10.852 }, 00:35:10.852 { 00:35:10.852 "name": "BaseBdev3", 00:35:10.852 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:10.852 "is_configured": true, 00:35:10.852 "data_offset": 2048, 00:35:10.852 "data_size": 63488 00:35:10.852 } 00:35:10.852 ] 00:35:10.852 }' 00:35:10.852 11:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:10.852 11:58:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.419 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:35:11.419 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:35:11.419 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:11.419 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:11.419 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:11.419 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:35:11.419 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:11.419 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:11.677 [2024-06-10 11:58:43.680889] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:11.677 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:11.677 "name": "Existed_Raid", 00:35:11.677 "aliases": [ 00:35:11.677 "bd68acea-c667-4231-85fb-82b42126a0a3" 00:35:11.677 ], 00:35:11.677 "product_name": "Raid Volume", 00:35:11.677 "block_size": 512, 00:35:11.677 "num_blocks": 126976, 00:35:11.677 "uuid": "bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:11.677 "assigned_rate_limits": { 00:35:11.677 "rw_ios_per_sec": 0, 00:35:11.677 "rw_mbytes_per_sec": 0, 00:35:11.677 "r_mbytes_per_sec": 0, 00:35:11.677 "w_mbytes_per_sec": 0 00:35:11.677 }, 00:35:11.677 "claimed": false, 00:35:11.677 "zoned": false, 00:35:11.677 "supported_io_types": { 00:35:11.677 "read": true, 00:35:11.677 "write": true, 00:35:11.677 "unmap": false, 00:35:11.677 "write_zeroes": true, 00:35:11.677 "flush": false, 00:35:11.677 "reset": true, 00:35:11.677 "compare": false, 00:35:11.677 "compare_and_write": false, 00:35:11.677 "abort": false, 00:35:11.677 "nvme_admin": false, 00:35:11.677 "nvme_io": false 00:35:11.677 }, 00:35:11.677 "driver_specific": { 00:35:11.677 "raid": { 00:35:11.677 "uuid": 
"bd68acea-c667-4231-85fb-82b42126a0a3", 00:35:11.677 "strip_size_kb": 64, 00:35:11.677 "state": "online", 00:35:11.677 "raid_level": "raid5f", 00:35:11.677 "superblock": true, 00:35:11.677 "num_base_bdevs": 3, 00:35:11.677 "num_base_bdevs_discovered": 3, 00:35:11.677 "num_base_bdevs_operational": 3, 00:35:11.677 "base_bdevs_list": [ 00:35:11.677 { 00:35:11.677 "name": "NewBaseBdev", 00:35:11.677 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:11.677 "is_configured": true, 00:35:11.677 "data_offset": 2048, 00:35:11.677 "data_size": 63488 00:35:11.677 }, 00:35:11.677 { 00:35:11.677 "name": "BaseBdev2", 00:35:11.677 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:11.677 "is_configured": true, 00:35:11.677 "data_offset": 2048, 00:35:11.677 "data_size": 63488 00:35:11.677 }, 00:35:11.677 { 00:35:11.677 "name": "BaseBdev3", 00:35:11.677 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:11.677 "is_configured": true, 00:35:11.677 "data_offset": 2048, 00:35:11.677 "data_size": 63488 00:35:11.677 } 00:35:11.677 ] 00:35:11.677 } 00:35:11.677 } 00:35:11.677 }' 00:35:11.677 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:11.936 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:35:11.936 BaseBdev2 00:35:11.936 BaseBdev3' 00:35:11.936 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:11.936 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:35:11.936 11:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:12.215 "name": "NewBaseBdev", 00:35:12.215 "aliases": [ 00:35:12.215 "90b27ff0-08a6-432b-b11a-25e5af931754" 00:35:12.215 ], 00:35:12.215 "product_name": "Malloc disk", 00:35:12.215 "block_size": 512, 00:35:12.215 "num_blocks": 65536, 00:35:12.215 "uuid": "90b27ff0-08a6-432b-b11a-25e5af931754", 00:35:12.215 "assigned_rate_limits": { 00:35:12.215 "rw_ios_per_sec": 0, 00:35:12.215 "rw_mbytes_per_sec": 0, 00:35:12.215 "r_mbytes_per_sec": 0, 00:35:12.215 "w_mbytes_per_sec": 0 00:35:12.215 }, 00:35:12.215 "claimed": true, 00:35:12.215 "claim_type": "exclusive_write", 00:35:12.215 "zoned": false, 00:35:12.215 "supported_io_types": { 00:35:12.215 "read": true, 00:35:12.215 "write": true, 00:35:12.215 "unmap": true, 00:35:12.215 "write_zeroes": true, 00:35:12.215 "flush": true, 00:35:12.215 "reset": true, 00:35:12.215 "compare": false, 00:35:12.215 "compare_and_write": false, 00:35:12.215 "abort": true, 00:35:12.215 "nvme_admin": false, 00:35:12.215 "nvme_io": false 00:35:12.215 }, 00:35:12.215 "memory_domains": [ 00:35:12.215 { 00:35:12.215 "dma_device_id": "system", 00:35:12.215 "dma_device_type": 1 00:35:12.215 }, 00:35:12.215 { 00:35:12.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:12.215 "dma_device_type": 2 00:35:12.215 } 00:35:12.215 ], 00:35:12.215 "driver_specific": {} 00:35:12.215 }' 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:12.215 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:12.474 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:12.474 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:12.474 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:12.475 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:12.475 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:12.475 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:12.475 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:12.734 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:12.734 "name": "BaseBdev2", 00:35:12.734 "aliases": [ 00:35:12.734 "fc3dcda7-ef61-47f6-b089-cca998e7de77" 00:35:12.734 ], 00:35:12.734 "product_name": "Malloc disk", 00:35:12.734 "block_size": 512, 00:35:12.734 "num_blocks": 65536, 00:35:12.734 "uuid": "fc3dcda7-ef61-47f6-b089-cca998e7de77", 00:35:12.734 "assigned_rate_limits": { 00:35:12.734 "rw_ios_per_sec": 0, 00:35:12.734 "rw_mbytes_per_sec": 0, 00:35:12.734 "r_mbytes_per_sec": 0, 00:35:12.734 "w_mbytes_per_sec": 0 00:35:12.734 }, 00:35:12.734 "claimed": true, 00:35:12.734 "claim_type": "exclusive_write", 00:35:12.734 "zoned": false, 00:35:12.734 "supported_io_types": { 00:35:12.734 "read": true, 00:35:12.734 "write": true, 00:35:12.734 "unmap": true, 00:35:12.734 "write_zeroes": true, 00:35:12.734 "flush": true, 00:35:12.734 "reset": true, 00:35:12.734 "compare": false, 00:35:12.734 "compare_and_write": false, 00:35:12.734 "abort": true, 00:35:12.734 "nvme_admin": false, 00:35:12.734 "nvme_io": false 00:35:12.734 }, 00:35:12.734 "memory_domains": [ 00:35:12.734 { 00:35:12.734 "dma_device_id": "system", 00:35:12.734 "dma_device_type": 1 00:35:12.734 }, 00:35:12.734 { 00:35:12.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:12.734 "dma_device_type": 2 00:35:12.734 } 00:35:12.734 ], 00:35:12.734 "driver_specific": {} 00:35:12.734 }' 00:35:12.734 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:12.734 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:12.734 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:12.734 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:12.734 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:12.993 11:58:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:12.993 11:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:35:13.254 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:13.254 "name": "BaseBdev3", 00:35:13.254 "aliases": [ 00:35:13.254 "6296693e-d4ec-44e7-a50f-9fff98fc444d" 00:35:13.254 ], 00:35:13.254 "product_name": "Malloc disk", 00:35:13.254 "block_size": 512, 00:35:13.254 "num_blocks": 65536, 00:35:13.254 "uuid": "6296693e-d4ec-44e7-a50f-9fff98fc444d", 00:35:13.254 "assigned_rate_limits": { 00:35:13.254 "rw_ios_per_sec": 0, 00:35:13.254 "rw_mbytes_per_sec": 0, 00:35:13.254 "r_mbytes_per_sec": 0, 00:35:13.254 "w_mbytes_per_sec": 0 00:35:13.254 }, 00:35:13.254 "claimed": true, 00:35:13.254 "claim_type": "exclusive_write", 00:35:13.254 "zoned": false, 00:35:13.254 "supported_io_types": { 00:35:13.254 "read": true, 00:35:13.254 "write": true, 00:35:13.254 "unmap": true, 00:35:13.254 "write_zeroes": true, 00:35:13.254 "flush": true, 00:35:13.254 "reset": true, 00:35:13.254 "compare": false, 00:35:13.254 "compare_and_write": false, 00:35:13.254 "abort": true, 00:35:13.254 "nvme_admin": false, 00:35:13.254 "nvme_io": false 00:35:13.254 }, 00:35:13.254 "memory_domains": [ 00:35:13.254 { 00:35:13.254 "dma_device_id": "system", 00:35:13.254 "dma_device_type": 1 00:35:13.254 }, 00:35:13.254 { 00:35:13.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:13.254 "dma_device_type": 2 00:35:13.254 } 00:35:13.254 ], 00:35:13.254 "driver_specific": {} 00:35:13.254 }' 00:35:13.254 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:13.254 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:13.254 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:13.254 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:13.254 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:13.513 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:13.513 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:13.513 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:13.513 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:13.513 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
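The property checks traced here (verify_raid_bdev_properties) repeat one pattern per configured base bdev: fetch its descriptor with bdev_get_bdevs -b NAME and compare block_size, md_size, md_interleave and dif_type against the values of the plain 512-byte malloc bdevs used in this run. A condensed sketch of that loop, under the same rpc.py/socket assumptions as above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for name in NewBaseBdev BaseBdev2 BaseBdev3; do
        info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(echo "$info" | jq .block_size)    == 512  ]]   # 512-byte blocks
        [[ $(echo "$info" | jq .md_size)       == null ]]   # no separate metadata
        [[ $(echo "$info" | jq .md_interleave) == null ]]   # no interleaved metadata
        [[ $(echo "$info" | jq .dif_type)      == null ]]   # no DIF protection
    done
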
00:35:13.513 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:13.513 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:13.513 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:13.772 [2024-06-10 11:58:45.727093] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:13.772 [2024-06-10 11:58:45.727350] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:13.772 [2024-06-10 11:58:45.727530] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:13.772 [2024-06-10 11:58:45.727906] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:13.772 [2024-06-10 11:58:45.728007] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 152800 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 152800 ']' 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 152800 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 152800 00:35:13.772 killing process with pid 152800 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 152800' 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 152800 00:35:13.772 11:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 152800 00:35:13.772 [2024-06-10 11:58:45.772264] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:14.340 [2024-06-10 11:58:46.113264] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:15.715 ************************************ 00:35:15.715 END TEST raid5f_state_function_test_sb 00:35:15.715 ************************************ 00:35:15.715 11:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:35:15.715 00:35:15.715 real 0m32.046s 00:35:15.715 user 0m57.872s 00:35:15.715 sys 0m4.434s 00:35:15.715 11:58:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:15.715 11:58:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.715 11:58:47 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:35:15.715 11:58:47 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:35:15.715 11:58:47 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:15.715 11:58:47 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:15.715 ************************************ 00:35:15.715 START TEST raid5f_superblock_test 00:35:15.715 ************************************ 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid5f 3 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=153784 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 153784 /var/tmp/spdk-raid.sock 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 153784 ']' 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:15.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:15.715 11:58:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.715 [2024-06-10 11:58:47.753883] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
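raid5f_superblock_test starts from a clean SPDK application: it launches the standalone bdev_svc app on its own RPC socket, waits for it to listen, and then builds the array out of malloc bdevs wrapped in passthru bdevs (pt1..pt3). A rough outline of that setup, assembled from the command lines traced below; backgrounding with & and the bare process handling are assumptions, the script itself uses its waitforlisten/killprocess helpers:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for i in 1 2 3; do
        rpc bdev_malloc_create 32 512 -b malloc$i      # 32 MiB malloc bdev, 512-byte blocks
        rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # -z 64 sets the 64 KiB strip size; -s creates the array with an on-disk superblock,
    # which is what the superblock ("_sb") variants of these tests exercise.
    rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s
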
00:35:15.715 [2024-06-10 11:58:47.754316] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153784 ] 00:35:15.973 [2024-06-10 11:58:47.922946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.232 [2024-06-10 11:58:48.163459] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.491 [2024-06-10 11:58:48.394238] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:16.749 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:35:17.023 malloc1 00:35:17.023 11:58:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:17.310 [2024-06-10 11:58:49.093332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:17.310 [2024-06-10 11:58:49.093652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.310 [2024-06-10 11:58:49.093751] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:35:17.310 [2024-06-10 11:58:49.093988] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.310 [2024-06-10 11:58:49.096688] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.310 [2024-06-10 11:58:49.096872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:17.310 pt1 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:17.310 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:35:17.567 malloc2 00:35:17.568 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:17.826 [2024-06-10 11:58:49.694530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:17.826 [2024-06-10 11:58:49.694892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.826 [2024-06-10 11:58:49.695064] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:17.826 [2024-06-10 11:58:49.695166] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.826 [2024-06-10 11:58:49.697830] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.826 [2024-06-10 11:58:49.697998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:17.826 pt2 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:17.826 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:35:18.084 malloc3 00:35:18.084 11:58:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:18.343 [2024-06-10 11:58:50.148151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:18.343 [2024-06-10 11:58:50.148471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.343 [2024-06-10 11:58:50.148542] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:18.343 [2024-06-10 11:58:50.148646] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.343 [2024-06-10 11:58:50.151142] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.343 [2024-06-10 11:58:50.151317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:35:18.343 pt3 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:35:18.343 [2024-06-10 11:58:50.368269] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:18.343 [2024-06-10 11:58:50.370554] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:18.343 [2024-06-10 11:58:50.370799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:18.343 [2024-06-10 11:58:50.371054] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:35:18.343 [2024-06-10 11:58:50.371161] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:18.343 [2024-06-10 11:58:50.371331] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:35:18.343 [2024-06-10 11:58:50.377314] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:35:18.343 [2024-06-10 11:58:50.377429] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:35:18.343 [2024-06-10 11:58:50.377693] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:18.343 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:18.602 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:18.602 "name": "raid_bdev1", 00:35:18.602 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:18.602 "strip_size_kb": 64, 00:35:18.602 "state": "online", 00:35:18.602 "raid_level": "raid5f", 00:35:18.602 "superblock": true, 00:35:18.602 "num_base_bdevs": 3, 00:35:18.602 "num_base_bdevs_discovered": 3, 00:35:18.602 "num_base_bdevs_operational": 3, 00:35:18.602 "base_bdevs_list": [ 00:35:18.602 { 00:35:18.602 
"name": "pt1", 00:35:18.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:18.602 "is_configured": true, 00:35:18.602 "data_offset": 2048, 00:35:18.602 "data_size": 63488 00:35:18.602 }, 00:35:18.602 { 00:35:18.602 "name": "pt2", 00:35:18.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:18.602 "is_configured": true, 00:35:18.602 "data_offset": 2048, 00:35:18.602 "data_size": 63488 00:35:18.602 }, 00:35:18.602 { 00:35:18.602 "name": "pt3", 00:35:18.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:18.602 "is_configured": true, 00:35:18.602 "data_offset": 2048, 00:35:18.602 "data_size": 63488 00:35:18.602 } 00:35:18.602 ] 00:35:18.602 }' 00:35:18.602 11:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:18.602 11:58:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:19.535 [2024-06-10 11:58:51.429206] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:19.535 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:19.535 "name": "raid_bdev1", 00:35:19.535 "aliases": [ 00:35:19.535 "4fc00fdc-5bbb-478f-bb66-273586c54eb0" 00:35:19.535 ], 00:35:19.535 "product_name": "Raid Volume", 00:35:19.535 "block_size": 512, 00:35:19.535 "num_blocks": 126976, 00:35:19.535 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:19.535 "assigned_rate_limits": { 00:35:19.535 "rw_ios_per_sec": 0, 00:35:19.535 "rw_mbytes_per_sec": 0, 00:35:19.535 "r_mbytes_per_sec": 0, 00:35:19.535 "w_mbytes_per_sec": 0 00:35:19.535 }, 00:35:19.535 "claimed": false, 00:35:19.535 "zoned": false, 00:35:19.535 "supported_io_types": { 00:35:19.535 "read": true, 00:35:19.535 "write": true, 00:35:19.535 "unmap": false, 00:35:19.535 "write_zeroes": true, 00:35:19.535 "flush": false, 00:35:19.535 "reset": true, 00:35:19.535 "compare": false, 00:35:19.535 "compare_and_write": false, 00:35:19.535 "abort": false, 00:35:19.535 "nvme_admin": false, 00:35:19.535 "nvme_io": false 00:35:19.535 }, 00:35:19.535 "driver_specific": { 00:35:19.535 "raid": { 00:35:19.535 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:19.535 "strip_size_kb": 64, 00:35:19.535 "state": "online", 00:35:19.535 "raid_level": "raid5f", 00:35:19.535 "superblock": true, 00:35:19.535 "num_base_bdevs": 3, 00:35:19.535 "num_base_bdevs_discovered": 3, 00:35:19.535 "num_base_bdevs_operational": 3, 00:35:19.535 "base_bdevs_list": [ 00:35:19.535 { 00:35:19.535 "name": "pt1", 00:35:19.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:19.535 "is_configured": true, 00:35:19.535 
"data_offset": 2048, 00:35:19.535 "data_size": 63488 00:35:19.535 }, 00:35:19.535 { 00:35:19.535 "name": "pt2", 00:35:19.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:19.535 "is_configured": true, 00:35:19.535 "data_offset": 2048, 00:35:19.535 "data_size": 63488 00:35:19.535 }, 00:35:19.535 { 00:35:19.535 "name": "pt3", 00:35:19.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:19.535 "is_configured": true, 00:35:19.535 "data_offset": 2048, 00:35:19.535 "data_size": 63488 00:35:19.535 } 00:35:19.535 ] 00:35:19.535 } 00:35:19.535 } 00:35:19.535 }' 00:35:19.536 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:19.536 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:19.536 pt2 00:35:19.536 pt3' 00:35:19.536 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:19.536 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:19.536 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:19.793 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:19.793 "name": "pt1", 00:35:19.794 "aliases": [ 00:35:19.794 "00000000-0000-0000-0000-000000000001" 00:35:19.794 ], 00:35:19.794 "product_name": "passthru", 00:35:19.794 "block_size": 512, 00:35:19.794 "num_blocks": 65536, 00:35:19.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:19.794 "assigned_rate_limits": { 00:35:19.794 "rw_ios_per_sec": 0, 00:35:19.794 "rw_mbytes_per_sec": 0, 00:35:19.794 "r_mbytes_per_sec": 0, 00:35:19.794 "w_mbytes_per_sec": 0 00:35:19.794 }, 00:35:19.794 "claimed": true, 00:35:19.794 "claim_type": "exclusive_write", 00:35:19.794 "zoned": false, 00:35:19.794 "supported_io_types": { 00:35:19.794 "read": true, 00:35:19.794 "write": true, 00:35:19.794 "unmap": true, 00:35:19.794 "write_zeroes": true, 00:35:19.794 "flush": true, 00:35:19.794 "reset": true, 00:35:19.794 "compare": false, 00:35:19.794 "compare_and_write": false, 00:35:19.794 "abort": true, 00:35:19.794 "nvme_admin": false, 00:35:19.794 "nvme_io": false 00:35:19.794 }, 00:35:19.794 "memory_domains": [ 00:35:19.794 { 00:35:19.794 "dma_device_id": "system", 00:35:19.794 "dma_device_type": 1 00:35:19.794 }, 00:35:19.794 { 00:35:19.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:19.794 "dma_device_type": 2 00:35:19.794 } 00:35:19.794 ], 00:35:19.794 "driver_specific": { 00:35:19.794 "passthru": { 00:35:19.794 "name": "pt1", 00:35:19.794 "base_bdev_name": "malloc1" 00:35:19.794 } 00:35:19.794 } 00:35:19.794 }' 00:35:19.794 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:19.794 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:19.794 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:19.794 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:19.794 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:20.052 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:20.052 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:20.052 11:58:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:20.052 11:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:20.052 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:20.052 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:20.052 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:20.052 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:20.052 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:20.052 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:20.616 "name": "pt2", 00:35:20.616 "aliases": [ 00:35:20.616 "00000000-0000-0000-0000-000000000002" 00:35:20.616 ], 00:35:20.616 "product_name": "passthru", 00:35:20.616 "block_size": 512, 00:35:20.616 "num_blocks": 65536, 00:35:20.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:20.616 "assigned_rate_limits": { 00:35:20.616 "rw_ios_per_sec": 0, 00:35:20.616 "rw_mbytes_per_sec": 0, 00:35:20.616 "r_mbytes_per_sec": 0, 00:35:20.616 "w_mbytes_per_sec": 0 00:35:20.616 }, 00:35:20.616 "claimed": true, 00:35:20.616 "claim_type": "exclusive_write", 00:35:20.616 "zoned": false, 00:35:20.616 "supported_io_types": { 00:35:20.616 "read": true, 00:35:20.616 "write": true, 00:35:20.616 "unmap": true, 00:35:20.616 "write_zeroes": true, 00:35:20.616 "flush": true, 00:35:20.616 "reset": true, 00:35:20.616 "compare": false, 00:35:20.616 "compare_and_write": false, 00:35:20.616 "abort": true, 00:35:20.616 "nvme_admin": false, 00:35:20.616 "nvme_io": false 00:35:20.616 }, 00:35:20.616 "memory_domains": [ 00:35:20.616 { 00:35:20.616 "dma_device_id": "system", 00:35:20.616 "dma_device_type": 1 00:35:20.616 }, 00:35:20.616 { 00:35:20.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:20.616 "dma_device_type": 2 00:35:20.616 } 00:35:20.616 ], 00:35:20.616 "driver_specific": { 00:35:20.616 "passthru": { 00:35:20.616 "name": "pt2", 00:35:20.616 "base_bdev_name": "malloc2" 00:35:20.616 } 00:35:20.616 } 00:35:20.616 }' 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:20.616 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:20.874 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:20.874 11:58:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:20.874 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:20.874 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:20.874 11:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:35:21.133 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:21.133 "name": "pt3", 00:35:21.133 "aliases": [ 00:35:21.133 "00000000-0000-0000-0000-000000000003" 00:35:21.133 ], 00:35:21.133 "product_name": "passthru", 00:35:21.133 "block_size": 512, 00:35:21.133 "num_blocks": 65536, 00:35:21.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:21.133 "assigned_rate_limits": { 00:35:21.133 "rw_ios_per_sec": 0, 00:35:21.133 "rw_mbytes_per_sec": 0, 00:35:21.133 "r_mbytes_per_sec": 0, 00:35:21.133 "w_mbytes_per_sec": 0 00:35:21.133 }, 00:35:21.133 "claimed": true, 00:35:21.133 "claim_type": "exclusive_write", 00:35:21.133 "zoned": false, 00:35:21.133 "supported_io_types": { 00:35:21.133 "read": true, 00:35:21.133 "write": true, 00:35:21.133 "unmap": true, 00:35:21.133 "write_zeroes": true, 00:35:21.133 "flush": true, 00:35:21.133 "reset": true, 00:35:21.133 "compare": false, 00:35:21.133 "compare_and_write": false, 00:35:21.133 "abort": true, 00:35:21.133 "nvme_admin": false, 00:35:21.133 "nvme_io": false 00:35:21.133 }, 00:35:21.133 "memory_domains": [ 00:35:21.133 { 00:35:21.133 "dma_device_id": "system", 00:35:21.133 "dma_device_type": 1 00:35:21.133 }, 00:35:21.133 { 00:35:21.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:21.134 "dma_device_type": 2 00:35:21.134 } 00:35:21.134 ], 00:35:21.134 "driver_specific": { 00:35:21.134 "passthru": { 00:35:21.134 "name": "pt3", 00:35:21.134 "base_bdev_name": "malloc3" 00:35:21.134 } 00:35:21.134 } 00:35:21.134 }' 00:35:21.134 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:21.134 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:21.134 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:21.134 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:21.393 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:35:21.651 [2024-06-10 11:58:53.677612] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:21.651 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4fc00fdc-5bbb-478f-bb66-273586c54eb0 00:35:21.651 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4fc00fdc-5bbb-478f-bb66-273586c54eb0 ']' 00:35:21.651 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:21.909 [2024-06-10 11:58:53.913541] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:21.909 [2024-06-10 11:58:53.913711] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:21.909 [2024-06-10 11:58:53.913898] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:21.909 [2024-06-10 11:58:53.914064] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:21.909 [2024-06-10 11:58:53.914151] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:35:21.909 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.909 11:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:35:22.167 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:35:22.167 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:35:22.167 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:22.167 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:22.426 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:22.426 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:22.684 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:22.684 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:22.943 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:22.943 11:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:23.201 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:35:23.201 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 
malloc2 malloc3' -n raid_bdev1 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:23.202 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:35:23.479 [2024-06-10 11:58:55.369754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:23.479 [2024-06-10 11:58:55.372021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:23.479 [2024-06-10 11:58:55.372225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:23.479 [2024-06-10 11:58:55.372313] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:23.479 [2024-06-10 11:58:55.372507] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:23.479 [2024-06-10 11:58:55.372572] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:35:23.479 [2024-06-10 11:58:55.372733] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:23.479 [2024-06-10 11:58:55.372821] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:35:23.479 request: 00:35:23.479 { 00:35:23.479 "name": "raid_bdev1", 00:35:23.479 "raid_level": "raid5f", 00:35:23.479 "base_bdevs": [ 00:35:23.479 "malloc1", 00:35:23.479 "malloc2", 00:35:23.479 "malloc3" 00:35:23.479 ], 00:35:23.479 "strip_size_kb": 64, 00:35:23.479 "superblock": false, 00:35:23.479 "method": "bdev_raid_create", 00:35:23.479 "req_id": 1 00:35:23.479 } 00:35:23.479 Got JSON-RPC error response 00:35:23.479 response: 00:35:23.479 { 00:35:23.479 "code": -17, 00:35:23.479 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:23.479 } 00:35:23.479 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:35:23.479 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:23.479 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:23.479 11:58:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:23.479 11:58:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:23.479 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:35:23.751 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:35:23.751 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:35:23.751 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:24.010 [2024-06-10 11:58:55.842989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:24.010 [2024-06-10 11:58:55.843499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:24.010 [2024-06-10 11:58:55.843745] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:24.010 [2024-06-10 11:58:55.843966] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:24.010 [2024-06-10 11:58:55.846743] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:24.010 [2024-06-10 11:58:55.847016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:24.010 [2024-06-10 11:58:55.847353] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:24.010 [2024-06-10 11:58:55.847526] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:24.010 pt1 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:24.010 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:24.011 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:24.011 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.011 11:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.269 11:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:24.270 "name": "raid_bdev1", 00:35:24.270 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:24.270 "strip_size_kb": 64, 00:35:24.270 "state": "configuring", 00:35:24.270 "raid_level": "raid5f", 00:35:24.270 "superblock": true, 00:35:24.270 "num_base_bdevs": 3, 00:35:24.270 "num_base_bdevs_discovered": 1, 00:35:24.270 "num_base_bdevs_operational": 3, 
00:35:24.270 "base_bdevs_list": [ 00:35:24.270 { 00:35:24.270 "name": "pt1", 00:35:24.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:24.270 "is_configured": true, 00:35:24.270 "data_offset": 2048, 00:35:24.270 "data_size": 63488 00:35:24.270 }, 00:35:24.270 { 00:35:24.270 "name": null, 00:35:24.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:24.270 "is_configured": false, 00:35:24.270 "data_offset": 2048, 00:35:24.270 "data_size": 63488 00:35:24.270 }, 00:35:24.270 { 00:35:24.270 "name": null, 00:35:24.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:24.270 "is_configured": false, 00:35:24.270 "data_offset": 2048, 00:35:24.270 "data_size": 63488 00:35:24.270 } 00:35:24.270 ] 00:35:24.270 }' 00:35:24.270 11:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:24.270 11:58:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.838 11:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:35:24.838 11:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:25.097 [2024-06-10 11:58:56.911655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:25.097 [2024-06-10 11:58:56.912297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:25.097 [2024-06-10 11:58:56.912547] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:35:25.097 [2024-06-10 11:58:56.912793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:25.097 [2024-06-10 11:58:56.913431] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:25.097 [2024-06-10 11:58:56.913672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:25.097 [2024-06-10 11:58:56.913986] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:25.097 [2024-06-10 11:58:56.914123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:25.097 pt2 00:35:25.097 11:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:25.355 [2024-06-10 11:58:57.175763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:25.355 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:25.355 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:25.355 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:25.355 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:25.355 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:25.355 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:25.355 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:25.355 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:25.356 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:35:25.356 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:25.356 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.356 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:25.356 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:25.356 "name": "raid_bdev1", 00:35:25.356 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:25.356 "strip_size_kb": 64, 00:35:25.356 "state": "configuring", 00:35:25.356 "raid_level": "raid5f", 00:35:25.356 "superblock": true, 00:35:25.356 "num_base_bdevs": 3, 00:35:25.356 "num_base_bdevs_discovered": 1, 00:35:25.356 "num_base_bdevs_operational": 3, 00:35:25.356 "base_bdevs_list": [ 00:35:25.356 { 00:35:25.356 "name": "pt1", 00:35:25.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:25.356 "is_configured": true, 00:35:25.356 "data_offset": 2048, 00:35:25.356 "data_size": 63488 00:35:25.356 }, 00:35:25.356 { 00:35:25.356 "name": null, 00:35:25.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:25.356 "is_configured": false, 00:35:25.356 "data_offset": 2048, 00:35:25.356 "data_size": 63488 00:35:25.356 }, 00:35:25.356 { 00:35:25.356 "name": null, 00:35:25.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:25.356 "is_configured": false, 00:35:25.356 "data_offset": 2048, 00:35:25.356 "data_size": 63488 00:35:25.356 } 00:35:25.356 ] 00:35:25.356 }' 00:35:25.356 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:25.356 11:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.923 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:35:25.923 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:25.923 11:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:26.182 [2024-06-10 11:58:58.239955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:26.182 [2024-06-10 11:58:58.240696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:26.182 [2024-06-10 11:58:58.241013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:26.182 [2024-06-10 11:58:58.241294] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:26.182 [2024-06-10 11:58:58.242094] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:26.182 [2024-06-10 11:58:58.242363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:26.182 [2024-06-10 11:58:58.242746] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:26.441 [2024-06-10 11:58:58.242885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:26.441 pt2 00:35:26.441 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:26.441 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:26.441 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:26.441 [2024-06-10 11:58:58.464027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:26.441 [2024-06-10 11:58:58.464314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:26.442 [2024-06-10 11:58:58.464423] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:35:26.442 [2024-06-10 11:58:58.464543] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:26.442 [2024-06-10 11:58:58.465152] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:26.442 [2024-06-10 11:58:58.465294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:26.442 [2024-06-10 11:58:58.465507] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:26.442 [2024-06-10 11:58:58.465544] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:26.442 [2024-06-10 11:58:58.465708] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:35:26.442 [2024-06-10 11:58:58.465726] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:26.442 [2024-06-10 11:58:58.465824] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:26.442 [2024-06-10 11:58:58.471638] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:35:26.442 [2024-06-10 11:58:58.471665] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:35:26.442 [2024-06-10 11:58:58.471882] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:26.442 pt3 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.442 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:35:27.033 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:27.033 "name": "raid_bdev1", 00:35:27.033 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:27.033 "strip_size_kb": 64, 00:35:27.033 "state": "online", 00:35:27.033 "raid_level": "raid5f", 00:35:27.033 "superblock": true, 00:35:27.033 "num_base_bdevs": 3, 00:35:27.033 "num_base_bdevs_discovered": 3, 00:35:27.033 "num_base_bdevs_operational": 3, 00:35:27.033 "base_bdevs_list": [ 00:35:27.033 { 00:35:27.033 "name": "pt1", 00:35:27.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:27.033 "is_configured": true, 00:35:27.033 "data_offset": 2048, 00:35:27.033 "data_size": 63488 00:35:27.033 }, 00:35:27.033 { 00:35:27.033 "name": "pt2", 00:35:27.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:27.033 "is_configured": true, 00:35:27.033 "data_offset": 2048, 00:35:27.033 "data_size": 63488 00:35:27.033 }, 00:35:27.033 { 00:35:27.033 "name": "pt3", 00:35:27.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:27.033 "is_configured": true, 00:35:27.033 "data_offset": 2048, 00:35:27.033 "data_size": 63488 00:35:27.033 } 00:35:27.033 ] 00:35:27.033 }' 00:35:27.033 11:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:27.033 11:58:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.621 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:35:27.621 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:27.621 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:27.621 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:27.621 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:27.621 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:35:27.621 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:27.621 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:27.621 [2024-06-10 11:58:59.671929] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:27.880 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:27.880 "name": "raid_bdev1", 00:35:27.880 "aliases": [ 00:35:27.880 "4fc00fdc-5bbb-478f-bb66-273586c54eb0" 00:35:27.880 ], 00:35:27.880 "product_name": "Raid Volume", 00:35:27.880 "block_size": 512, 00:35:27.880 "num_blocks": 126976, 00:35:27.880 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:27.880 "assigned_rate_limits": { 00:35:27.880 "rw_ios_per_sec": 0, 00:35:27.880 "rw_mbytes_per_sec": 0, 00:35:27.880 "r_mbytes_per_sec": 0, 00:35:27.880 "w_mbytes_per_sec": 0 00:35:27.880 }, 00:35:27.880 "claimed": false, 00:35:27.880 "zoned": false, 00:35:27.880 "supported_io_types": { 00:35:27.880 "read": true, 00:35:27.880 "write": true, 00:35:27.880 "unmap": false, 00:35:27.880 "write_zeroes": true, 00:35:27.880 "flush": false, 00:35:27.880 "reset": true, 00:35:27.880 "compare": false, 00:35:27.880 "compare_and_write": false, 00:35:27.880 "abort": false, 00:35:27.880 "nvme_admin": false, 00:35:27.880 "nvme_io": false 00:35:27.880 }, 00:35:27.880 "driver_specific": { 00:35:27.880 "raid": 
{ 00:35:27.880 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:27.880 "strip_size_kb": 64, 00:35:27.880 "state": "online", 00:35:27.880 "raid_level": "raid5f", 00:35:27.880 "superblock": true, 00:35:27.880 "num_base_bdevs": 3, 00:35:27.880 "num_base_bdevs_discovered": 3, 00:35:27.880 "num_base_bdevs_operational": 3, 00:35:27.880 "base_bdevs_list": [ 00:35:27.880 { 00:35:27.880 "name": "pt1", 00:35:27.880 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:27.880 "is_configured": true, 00:35:27.880 "data_offset": 2048, 00:35:27.880 "data_size": 63488 00:35:27.880 }, 00:35:27.880 { 00:35:27.880 "name": "pt2", 00:35:27.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:27.880 "is_configured": true, 00:35:27.880 "data_offset": 2048, 00:35:27.880 "data_size": 63488 00:35:27.880 }, 00:35:27.880 { 00:35:27.880 "name": "pt3", 00:35:27.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:27.880 "is_configured": true, 00:35:27.880 "data_offset": 2048, 00:35:27.880 "data_size": 63488 00:35:27.880 } 00:35:27.880 ] 00:35:27.880 } 00:35:27.880 } 00:35:27.880 }' 00:35:27.880 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:27.880 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:27.880 pt2 00:35:27.880 pt3' 00:35:27.880 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:27.880 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:27.880 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:28.138 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:28.138 "name": "pt1", 00:35:28.138 "aliases": [ 00:35:28.138 "00000000-0000-0000-0000-000000000001" 00:35:28.138 ], 00:35:28.138 "product_name": "passthru", 00:35:28.138 "block_size": 512, 00:35:28.138 "num_blocks": 65536, 00:35:28.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:28.138 "assigned_rate_limits": { 00:35:28.138 "rw_ios_per_sec": 0, 00:35:28.138 "rw_mbytes_per_sec": 0, 00:35:28.138 "r_mbytes_per_sec": 0, 00:35:28.138 "w_mbytes_per_sec": 0 00:35:28.138 }, 00:35:28.138 "claimed": true, 00:35:28.138 "claim_type": "exclusive_write", 00:35:28.138 "zoned": false, 00:35:28.138 "supported_io_types": { 00:35:28.138 "read": true, 00:35:28.138 "write": true, 00:35:28.138 "unmap": true, 00:35:28.138 "write_zeroes": true, 00:35:28.138 "flush": true, 00:35:28.138 "reset": true, 00:35:28.138 "compare": false, 00:35:28.138 "compare_and_write": false, 00:35:28.138 "abort": true, 00:35:28.138 "nvme_admin": false, 00:35:28.138 "nvme_io": false 00:35:28.138 }, 00:35:28.138 "memory_domains": [ 00:35:28.138 { 00:35:28.138 "dma_device_id": "system", 00:35:28.138 "dma_device_type": 1 00:35:28.138 }, 00:35:28.138 { 00:35:28.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:28.138 "dma_device_type": 2 00:35:28.138 } 00:35:28.138 ], 00:35:28.138 "driver_specific": { 00:35:28.138 "passthru": { 00:35:28.138 "name": "pt1", 00:35:28.138 "base_bdev_name": "malloc1" 00:35:28.138 } 00:35:28.138 } 00:35:28.138 }' 00:35:28.138 11:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:28.138 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:28.138 11:59:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:28.138 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:28.138 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:28.138 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:28.138 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:28.396 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:28.396 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:28.396 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:28.396 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:28.396 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:28.396 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:28.396 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:28.396 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:28.654 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:28.654 "name": "pt2", 00:35:28.654 "aliases": [ 00:35:28.654 "00000000-0000-0000-0000-000000000002" 00:35:28.654 ], 00:35:28.654 "product_name": "passthru", 00:35:28.654 "block_size": 512, 00:35:28.654 "num_blocks": 65536, 00:35:28.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:28.654 "assigned_rate_limits": { 00:35:28.654 "rw_ios_per_sec": 0, 00:35:28.654 "rw_mbytes_per_sec": 0, 00:35:28.654 "r_mbytes_per_sec": 0, 00:35:28.654 "w_mbytes_per_sec": 0 00:35:28.654 }, 00:35:28.654 "claimed": true, 00:35:28.655 "claim_type": "exclusive_write", 00:35:28.655 "zoned": false, 00:35:28.655 "supported_io_types": { 00:35:28.655 "read": true, 00:35:28.655 "write": true, 00:35:28.655 "unmap": true, 00:35:28.655 "write_zeroes": true, 00:35:28.655 "flush": true, 00:35:28.655 "reset": true, 00:35:28.655 "compare": false, 00:35:28.655 "compare_and_write": false, 00:35:28.655 "abort": true, 00:35:28.655 "nvme_admin": false, 00:35:28.655 "nvme_io": false 00:35:28.655 }, 00:35:28.655 "memory_domains": [ 00:35:28.655 { 00:35:28.655 "dma_device_id": "system", 00:35:28.655 "dma_device_type": 1 00:35:28.655 }, 00:35:28.655 { 00:35:28.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:28.655 "dma_device_type": 2 00:35:28.655 } 00:35:28.655 ], 00:35:28.655 "driver_specific": { 00:35:28.655 "passthru": { 00:35:28.655 "name": "pt2", 00:35:28.655 "base_bdev_name": "malloc2" 00:35:28.655 } 00:35:28.655 } 00:35:28.655 }' 00:35:28.655 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:28.655 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:28.655 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:28.655 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:28.655 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:28.914 11:59:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:28.914 11:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:35:29.172 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:29.172 "name": "pt3", 00:35:29.172 "aliases": [ 00:35:29.172 "00000000-0000-0000-0000-000000000003" 00:35:29.172 ], 00:35:29.172 "product_name": "passthru", 00:35:29.172 "block_size": 512, 00:35:29.172 "num_blocks": 65536, 00:35:29.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:29.172 "assigned_rate_limits": { 00:35:29.172 "rw_ios_per_sec": 0, 00:35:29.172 "rw_mbytes_per_sec": 0, 00:35:29.172 "r_mbytes_per_sec": 0, 00:35:29.172 "w_mbytes_per_sec": 0 00:35:29.172 }, 00:35:29.172 "claimed": true, 00:35:29.172 "claim_type": "exclusive_write", 00:35:29.172 "zoned": false, 00:35:29.172 "supported_io_types": { 00:35:29.172 "read": true, 00:35:29.172 "write": true, 00:35:29.172 "unmap": true, 00:35:29.172 "write_zeroes": true, 00:35:29.172 "flush": true, 00:35:29.172 "reset": true, 00:35:29.172 "compare": false, 00:35:29.172 "compare_and_write": false, 00:35:29.172 "abort": true, 00:35:29.172 "nvme_admin": false, 00:35:29.172 "nvme_io": false 00:35:29.172 }, 00:35:29.172 "memory_domains": [ 00:35:29.172 { 00:35:29.172 "dma_device_id": "system", 00:35:29.172 "dma_device_type": 1 00:35:29.172 }, 00:35:29.172 { 00:35:29.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:29.172 "dma_device_type": 2 00:35:29.172 } 00:35:29.172 ], 00:35:29.172 "driver_specific": { 00:35:29.172 "passthru": { 00:35:29.172 "name": "pt3", 00:35:29.172 "base_bdev_name": "malloc3" 00:35:29.172 } 00:35:29.172 } 00:35:29.172 }' 00:35:29.172 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:29.172 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:29.431 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:29.431 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:29.431 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:29.431 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:29.431 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:29.431 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:29.431 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:29.431 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:29.431 11:59:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:29.690 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:29.690 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:35:29.690 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:29.949 [2024-06-10 11:59:01.824448] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:29.949 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4fc00fdc-5bbb-478f-bb66-273586c54eb0 '!=' 4fc00fdc-5bbb-478f-bb66-273586c54eb0 ']' 00:35:29.949 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:35:29.949 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:29.949 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:35:29.949 11:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:30.208 [2024-06-10 11:59:02.111119] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:30.208 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:30.208 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:30.208 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:30.208 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:30.208 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:30.208 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:30.209 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:30.209 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:30.209 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:30.209 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:30.209 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.209 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.468 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:30.468 "name": "raid_bdev1", 00:35:30.468 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:30.468 "strip_size_kb": 64, 00:35:30.468 "state": "online", 00:35:30.468 "raid_level": "raid5f", 00:35:30.468 "superblock": true, 00:35:30.468 "num_base_bdevs": 3, 00:35:30.468 "num_base_bdevs_discovered": 2, 00:35:30.468 "num_base_bdevs_operational": 2, 00:35:30.468 "base_bdevs_list": [ 00:35:30.468 { 00:35:30.468 "name": null, 00:35:30.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.468 "is_configured": false, 00:35:30.468 "data_offset": 2048, 00:35:30.468 "data_size": 63488 00:35:30.468 }, 00:35:30.468 { 00:35:30.468 "name": "pt2", 00:35:30.468 
"uuid": "00000000-0000-0000-0000-000000000002", 00:35:30.468 "is_configured": true, 00:35:30.468 "data_offset": 2048, 00:35:30.468 "data_size": 63488 00:35:30.468 }, 00:35:30.468 { 00:35:30.468 "name": "pt3", 00:35:30.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:30.468 "is_configured": true, 00:35:30.468 "data_offset": 2048, 00:35:30.468 "data_size": 63488 00:35:30.468 } 00:35:30.468 ] 00:35:30.468 }' 00:35:30.468 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:30.468 11:59:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.036 11:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:31.294 [2024-06-10 11:59:03.203488] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:31.294 [2024-06-10 11:59:03.203530] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:31.294 [2024-06-10 11:59:03.203618] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:31.294 [2024-06-10 11:59:03.203691] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:31.294 [2024-06-10 11:59:03.203702] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:35:31.294 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:35:31.294 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.552 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:35:31.553 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:35:31.553 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:35:31.553 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:31.553 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:31.811 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:31.811 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:31.811 11:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:32.070 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:32.071 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:32.071 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:35:32.071 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:32.071 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:32.329 [2024-06-10 11:59:04.211721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:32.329 [2024-06-10 11:59:04.211810] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:32.329 [2024-06-10 11:59:04.211866] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:32.329 [2024-06-10 11:59:04.211896] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:32.329 [2024-06-10 11:59:04.214490] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:32.329 [2024-06-10 11:59:04.214544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:32.329 [2024-06-10 11:59:04.214677] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:32.329 [2024-06-10 11:59:04.214732] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:32.329 pt2 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.329 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.587 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:32.587 "name": "raid_bdev1", 00:35:32.587 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:32.587 "strip_size_kb": 64, 00:35:32.587 "state": "configuring", 00:35:32.587 "raid_level": "raid5f", 00:35:32.587 "superblock": true, 00:35:32.587 "num_base_bdevs": 3, 00:35:32.587 "num_base_bdevs_discovered": 1, 00:35:32.587 "num_base_bdevs_operational": 2, 00:35:32.587 "base_bdevs_list": [ 00:35:32.587 { 00:35:32.587 "name": null, 00:35:32.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.587 "is_configured": false, 00:35:32.587 "data_offset": 2048, 00:35:32.587 "data_size": 63488 00:35:32.587 }, 00:35:32.587 { 00:35:32.587 "name": "pt2", 00:35:32.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:32.587 "is_configured": true, 00:35:32.587 "data_offset": 2048, 00:35:32.587 "data_size": 63488 00:35:32.587 }, 00:35:32.587 { 00:35:32.587 "name": null, 00:35:32.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:32.587 "is_configured": false, 00:35:32.587 "data_offset": 2048, 00:35:32.587 "data_size": 63488 00:35:32.587 } 00:35:32.587 ] 00:35:32.587 }' 00:35:32.587 11:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:35:32.587 11:59:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.153 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:35:33.153 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:33.153 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:35:33.153 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:33.153 [2024-06-10 11:59:05.211204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:33.153 [2024-06-10 11:59:05.211291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:33.153 [2024-06-10 11:59:05.211337] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:33.153 [2024-06-10 11:59:05.211361] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:33.153 [2024-06-10 11:59:05.211884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:33.153 [2024-06-10 11:59:05.211919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:33.153 [2024-06-10 11:59:05.212043] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:33.153 [2024-06-10 11:59:05.212066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:33.153 [2024-06-10 11:59:05.212181] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:35:33.153 [2024-06-10 11:59:05.212191] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:33.153 [2024-06-10 11:59:05.212294] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:33.411 [2024-06-10 11:59:05.217879] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:35:33.411 [2024-06-10 11:59:05.217909] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:35:33.411 [2024-06-10 11:59:05.218206] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:33.411 pt3 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:33.411 11:59:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:33.411 "name": "raid_bdev1", 00:35:33.411 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:33.411 "strip_size_kb": 64, 00:35:33.411 "state": "online", 00:35:33.411 "raid_level": "raid5f", 00:35:33.411 "superblock": true, 00:35:33.411 "num_base_bdevs": 3, 00:35:33.411 "num_base_bdevs_discovered": 2, 00:35:33.411 "num_base_bdevs_operational": 2, 00:35:33.411 "base_bdevs_list": [ 00:35:33.411 { 00:35:33.411 "name": null, 00:35:33.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.411 "is_configured": false, 00:35:33.411 "data_offset": 2048, 00:35:33.411 "data_size": 63488 00:35:33.411 }, 00:35:33.411 { 00:35:33.411 "name": "pt2", 00:35:33.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:33.411 "is_configured": true, 00:35:33.411 "data_offset": 2048, 00:35:33.411 "data_size": 63488 00:35:33.411 }, 00:35:33.411 { 00:35:33.411 "name": "pt3", 00:35:33.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:33.411 "is_configured": true, 00:35:33.411 "data_offset": 2048, 00:35:33.411 "data_size": 63488 00:35:33.411 } 00:35:33.411 ] 00:35:33.411 }' 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:33.411 11:59:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.347 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:34.347 [2024-06-10 11:59:06.358583] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:34.347 [2024-06-10 11:59:06.358628] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:34.347 [2024-06-10 11:59:06.358715] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:34.347 [2024-06-10 11:59:06.358790] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:34.347 [2024-06-10 11:59:06.358802] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:35:34.347 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:34.347 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:35:34.914 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:35:34.914 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:35:34.914 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:35:34.914 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:35:34.914 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:34.914 11:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create 
-b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:35.173 [2024-06-10 11:59:07.162770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:35.173 [2024-06-10 11:59:07.162863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:35.173 [2024-06-10 11:59:07.162905] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:35:35.173 [2024-06-10 11:59:07.162941] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:35.173 [2024-06-10 11:59:07.165850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:35.173 [2024-06-10 11:59:07.165911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:35.173 [2024-06-10 11:59:07.166055] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:35.173 [2024-06-10 11:59:07.166106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:35.173 [2024-06-10 11:59:07.166281] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:35.173 [2024-06-10 11:59:07.166303] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:35.173 [2024-06-10 11:59:07.166333] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:35:35.173 [2024-06-10 11:59:07.166425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:35.173 pt1 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.173 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.174 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.174 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.174 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.174 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:35.433 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:35.433 "name": "raid_bdev1", 00:35:35.433 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:35.433 "strip_size_kb": 64, 00:35:35.433 "state": "configuring", 00:35:35.433 "raid_level": "raid5f", 00:35:35.433 "superblock": true, 00:35:35.433 "num_base_bdevs": 3, 00:35:35.433 
"num_base_bdevs_discovered": 1, 00:35:35.433 "num_base_bdevs_operational": 2, 00:35:35.433 "base_bdevs_list": [ 00:35:35.433 { 00:35:35.433 "name": null, 00:35:35.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.433 "is_configured": false, 00:35:35.433 "data_offset": 2048, 00:35:35.433 "data_size": 63488 00:35:35.433 }, 00:35:35.433 { 00:35:35.433 "name": "pt2", 00:35:35.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:35.433 "is_configured": true, 00:35:35.433 "data_offset": 2048, 00:35:35.433 "data_size": 63488 00:35:35.433 }, 00:35:35.433 { 00:35:35.433 "name": null, 00:35:35.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:35.433 "is_configured": false, 00:35:35.433 "data_offset": 2048, 00:35:35.433 "data_size": 63488 00:35:35.433 } 00:35:35.433 ] 00:35:35.433 }' 00:35:35.433 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:35.433 11:59:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.005 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:35:36.005 11:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:36.267 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:35:36.267 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:36.528 [2024-06-10 11:59:08.439089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:36.528 [2024-06-10 11:59:08.439194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:36.528 [2024-06-10 11:59:08.439228] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:35:36.528 [2024-06-10 11:59:08.439263] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:36.528 [2024-06-10 11:59:08.439786] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:36.528 [2024-06-10 11:59:08.439834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:36.528 [2024-06-10 11:59:08.439949] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:36.528 [2024-06-10 11:59:08.439973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:36.528 [2024-06-10 11:59:08.440093] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:35:36.528 [2024-06-10 11:59:08.440112] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:36.528 [2024-06-10 11:59:08.440206] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:35:36.528 [2024-06-10 11:59:08.446780] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:35:36.528 [2024-06-10 11:59:08.446806] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:35:36.528 [2024-06-10 11:59:08.447031] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:36.528 pt3 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 
online raid5f 64 2 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.528 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.787 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:36.787 "name": "raid_bdev1", 00:35:36.787 "uuid": "4fc00fdc-5bbb-478f-bb66-273586c54eb0", 00:35:36.787 "strip_size_kb": 64, 00:35:36.787 "state": "online", 00:35:36.787 "raid_level": "raid5f", 00:35:36.787 "superblock": true, 00:35:36.787 "num_base_bdevs": 3, 00:35:36.787 "num_base_bdevs_discovered": 2, 00:35:36.787 "num_base_bdevs_operational": 2, 00:35:36.787 "base_bdevs_list": [ 00:35:36.787 { 00:35:36.787 "name": null, 00:35:36.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:36.787 "is_configured": false, 00:35:36.787 "data_offset": 2048, 00:35:36.787 "data_size": 63488 00:35:36.787 }, 00:35:36.787 { 00:35:36.787 "name": "pt2", 00:35:36.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:36.787 "is_configured": true, 00:35:36.787 "data_offset": 2048, 00:35:36.787 "data_size": 63488 00:35:36.787 }, 00:35:36.787 { 00:35:36.787 "name": "pt3", 00:35:36.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:36.787 "is_configured": true, 00:35:36.787 "data_offset": 2048, 00:35:36.787 "data_size": 63488 00:35:36.787 } 00:35:36.787 ] 00:35:36.787 }' 00:35:36.787 11:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:36.787 11:59:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.354 11:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:37.354 11:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:37.613 11:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:37.613 11:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:37.613 11:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:37.871 [2024-06-10 11:59:09.831524] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:37.871 
11:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4fc00fdc-5bbb-478f-bb66-273586c54eb0 '!=' 4fc00fdc-5bbb-478f-bb66-273586c54eb0 ']' 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 153784 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 153784 ']' 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # kill -0 153784 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # uname 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 153784 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 153784' 00:35:37.871 killing process with pid 153784 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # kill 153784 00:35:37.871 [2024-06-10 11:59:09.880150] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:37.871 [2024-06-10 11:59:09.880238] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:37.871 [2024-06-10 11:59:09.880313] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:37.871 [2024-06-10 11:59:09.880325] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:35:37.871 11:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # wait 153784 00:35:38.439 [2024-06-10 11:59:10.209286] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:39.817 11:59:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:35:39.817 00:35:39.817 real 0m23.960s 00:35:39.817 user 0m42.981s 00:35:39.817 sys 0m3.545s 00:35:39.817 11:59:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:39.817 11:59:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.817 ************************************ 00:35:39.817 END TEST raid5f_superblock_test 00:35:39.817 ************************************ 00:35:39.817 11:59:11 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:35:39.817 11:59:11 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:35:39.817 11:59:11 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:35:39.817 11:59:11 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:39.818 11:59:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:39.818 ************************************ 00:35:39.818 START TEST raid5f_rebuild_test 00:35:39.818 ************************************ 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid5f 3 false false true 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local 
num_base_bdevs=3 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=154534 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 154534 /var/tmp/spdk-raid.sock 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@830 -- # '[' -z 154534 ']' 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:39.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:39.818 11:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.818 [2024-06-10 11:59:11.803696] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:35:39.818 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:39.818 Zero copy mechanism will not be used. 00:35:39.818 [2024-06-10 11:59:11.803915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154534 ] 00:35:40.077 [2024-06-10 11:59:11.986046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.334 [2024-06-10 11:59:12.277656] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.592 [2024-06-10 11:59:12.577988] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:40.851 11:59:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:40.851 11:59:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@863 -- # return 0 00:35:40.851 11:59:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:40.851 11:59:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:41.110 BaseBdev1_malloc 00:35:41.110 11:59:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:41.370 [2024-06-10 11:59:13.417689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:41.370 [2024-06-10 11:59:13.417822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:41.370 [2024-06-10 11:59:13.417885] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:35:41.370 [2024-06-10 11:59:13.417914] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:41.370 [2024-06-10 11:59:13.420604] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:41.370 [2024-06-10 11:59:13.420666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:41.370 BaseBdev1 00:35:41.628 11:59:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:41.628 11:59:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:41.887 BaseBdev2_malloc 00:35:41.887 11:59:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:42.146 [2024-06-10 11:59:14.020105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:42.146 [2024-06-10 11:59:14.020237] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:42.146 [2024-06-10 11:59:14.020297] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:42.146 [2024-06-10 11:59:14.020320] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:42.146 [2024-06-10 11:59:14.022905] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:42.146 [2024-06-10 11:59:14.022962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:42.146 BaseBdev2 00:35:42.146 11:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:42.146 11:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:42.412 BaseBdev3_malloc 00:35:42.412 11:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:42.675 [2024-06-10 11:59:14.585909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:42.675 [2024-06-10 11:59:14.586028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:42.675 [2024-06-10 11:59:14.586068] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:42.675 [2024-06-10 11:59:14.586101] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:42.675 [2024-06-10 11:59:14.588664] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:42.675 [2024-06-10 11:59:14.588726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:42.675 BaseBdev3 00:35:42.675 11:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:42.933 spare_malloc 00:35:42.933 11:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:43.191 spare_delay 00:35:43.191 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:43.450 [2024-06-10 11:59:15.321838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:43.450 [2024-06-10 11:59:15.321964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.450 [2024-06-10 11:59:15.322007] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:43.450 [2024-06-10 11:59:15.322039] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.450 [2024-06-10 11:59:15.324616] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.450 [2024-06-10 11:59:15.324673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:43.450 spare 00:35:43.450 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 
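For reference, the array being assembled at this point in the trace can be rebuilt by hand with the same RPCs; the following is a minimal sketch drawn from the commands in the log (the rpc.py path and the /var/tmp/spdk-raid.sock socket are specific to this run, and only the first of the three BaseBdevN malloc/passthru pairs is written out):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
    # base bdevs: 32 MiB, 512 B-block malloc devices wrapped in passthru bdevs (repeat for BaseBdev2/BaseBdev3)
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc -s $sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # the future rebuild target: a malloc bdev behind a delay bdev, exposed through passthru as "spare"
    $rpc -s $sock bdev_malloc_create 32 512 -b spare_malloc
    $rpc -s $sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc -s $sock bdev_passthru_create -b spare_delay -p spare
    # assemble raid5f with a 64 KiB strip; with 3 base bdevs a full stripe holds 2 data strips,
    # i.e. 128 KiB or 256 x 512 B blocks, consistent with write_unit_size=256 and the
    # dd bs=131072 full-stripe writes that appear later in the trace
    $rpc -s $sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1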
00:35:43.708 [2024-06-10 11:59:15.641978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:43.708 [2024-06-10 11:59:15.644176] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:43.708 [2024-06-10 11:59:15.644245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:43.708 [2024-06-10 11:59:15.644367] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:35:43.708 [2024-06-10 11:59:15.644377] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:35:43.708 [2024-06-10 11:59:15.644517] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:43.708 [2024-06-10 11:59:15.651924] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:35:43.708 [2024-06-10 11:59:15.651955] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:35:43.708 [2024-06-10 11:59:15.652205] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:43.708 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:43.967 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:43.967 "name": "raid_bdev1", 00:35:43.967 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:43.967 "strip_size_kb": 64, 00:35:43.967 "state": "online", 00:35:43.967 "raid_level": "raid5f", 00:35:43.967 "superblock": false, 00:35:43.967 "num_base_bdevs": 3, 00:35:43.967 "num_base_bdevs_discovered": 3, 00:35:43.967 "num_base_bdevs_operational": 3, 00:35:43.967 "base_bdevs_list": [ 00:35:43.967 { 00:35:43.967 "name": "BaseBdev1", 00:35:43.967 "uuid": "262ae5a3-efc0-50a3-a4e3-6c026ba59c98", 00:35:43.967 "is_configured": true, 00:35:43.967 "data_offset": 0, 00:35:43.967 "data_size": 65536 00:35:43.968 }, 00:35:43.968 { 00:35:43.968 "name": "BaseBdev2", 00:35:43.968 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:43.968 "is_configured": true, 00:35:43.968 "data_offset": 0, 00:35:43.968 "data_size": 65536 00:35:43.968 }, 00:35:43.968 { 00:35:43.968 "name": "BaseBdev3", 00:35:43.968 
"uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:43.968 "is_configured": true, 00:35:43.968 "data_offset": 0, 00:35:43.968 "data_size": 65536 00:35:43.968 } 00:35:43.968 ] 00:35:43.968 }' 00:35:43.968 11:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:43.968 11:59:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.536 11:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:44.536 11:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:44.794 [2024-06-10 11:59:16.769210] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:44.794 11:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:35:44.794 11:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.794 11:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:45.053 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:45.312 [2024-06-10 11:59:17.337152] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:45.312 /dev/nbd0 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:45.571 11:59:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # break 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:45.571 1+0 records in 00:35:45.571 1+0 records out 00:35:45.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240474 s, 17.0 MB/s 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:35:45.571 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:35:46.139 512+0 records in 00:35:46.139 512+0 records out 00:35:46.139 67108864 bytes (67 MB, 64 MiB) copied, 0.497149 s, 135 MB/s 00:35:46.139 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:46.139 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:46.139 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:46.139 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:46.139 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:46.139 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:46.139 11:59:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:46.398 [2024-06-10 11:59:18.265240] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:46.398 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:46.398 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:46.398 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:46.398 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:46.398 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:46.398 11:59:18 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:46.398 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:46.398 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:46.398 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:46.657 [2024-06-10 11:59:18.531770] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.657 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:46.915 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:46.915 "name": "raid_bdev1", 00:35:46.915 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:46.915 "strip_size_kb": 64, 00:35:46.915 "state": "online", 00:35:46.915 "raid_level": "raid5f", 00:35:46.915 "superblock": false, 00:35:46.915 "num_base_bdevs": 3, 00:35:46.915 "num_base_bdevs_discovered": 2, 00:35:46.915 "num_base_bdevs_operational": 2, 00:35:46.915 "base_bdevs_list": [ 00:35:46.915 { 00:35:46.915 "name": null, 00:35:46.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.915 "is_configured": false, 00:35:46.915 "data_offset": 0, 00:35:46.915 "data_size": 65536 00:35:46.915 }, 00:35:46.915 { 00:35:46.915 "name": "BaseBdev2", 00:35:46.915 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:46.915 "is_configured": true, 00:35:46.915 "data_offset": 0, 00:35:46.915 "data_size": 65536 00:35:46.915 }, 00:35:46.915 { 00:35:46.915 "name": "BaseBdev3", 00:35:46.915 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:46.915 "is_configured": true, 00:35:46.915 "data_offset": 0, 00:35:46.915 "data_size": 65536 00:35:46.915 } 00:35:46.915 ] 00:35:46.915 }' 00:35:46.915 11:59:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:46.915 11:59:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:47.480 11:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 
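With the spare added back via bdev_raid_add_base_bdev, the harness follows the rebuild by repeatedly dumping raid_bdev1 and inspecting the embedded "process" object (later in the trace the first rebuild is cut short by removing the spare again, and a second one is left to run to completion). A rough equivalent of that polling, reusing the same RPC and jq filters seen in the trace; the one-second interval is arbitrary and this loop is only a sketch, not the test's actual control flow:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
    while :; do
        # pick only the raid_bdev1 entry out of the full bdev list
        info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # process.type reads "rebuild" while the background process runs and is absent once it finishes
        [ "$(jq -r '.process.type // "none"' <<< "$info")" = rebuild ] || break
        # progress is reported in blocks and percent, e.g. "blocks": 24576, "percent": 18
        jq -r '.process.progress.percent' <<< "$info"
        sleep 1
    done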
00:35:47.738 [2024-06-10 11:59:19.640098] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:47.738 [2024-06-10 11:59:19.661694] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:35:47.738 [2024-06-10 11:59:19.672610] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:47.738 11:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:48.674 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:48.674 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:48.674 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:48.674 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:48.674 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:48.674 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.674 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.933 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:48.933 "name": "raid_bdev1", 00:35:48.933 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:48.933 "strip_size_kb": 64, 00:35:48.933 "state": "online", 00:35:48.933 "raid_level": "raid5f", 00:35:48.933 "superblock": false, 00:35:48.933 "num_base_bdevs": 3, 00:35:48.933 "num_base_bdevs_discovered": 3, 00:35:48.933 "num_base_bdevs_operational": 3, 00:35:48.933 "process": { 00:35:48.933 "type": "rebuild", 00:35:48.933 "target": "spare", 00:35:48.933 "progress": { 00:35:48.933 "blocks": 22528, 00:35:48.933 "percent": 17 00:35:48.933 } 00:35:48.933 }, 00:35:48.933 "base_bdevs_list": [ 00:35:48.933 { 00:35:48.933 "name": "spare", 00:35:48.933 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:48.933 "is_configured": true, 00:35:48.933 "data_offset": 0, 00:35:48.933 "data_size": 65536 00:35:48.933 }, 00:35:48.933 { 00:35:48.933 "name": "BaseBdev2", 00:35:48.933 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:48.933 "is_configured": true, 00:35:48.933 "data_offset": 0, 00:35:48.933 "data_size": 65536 00:35:48.933 }, 00:35:48.933 { 00:35:48.933 "name": "BaseBdev3", 00:35:48.933 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:48.933 "is_configured": true, 00:35:48.933 "data_offset": 0, 00:35:48.933 "data_size": 65536 00:35:48.933 } 00:35:48.933 ] 00:35:48.933 }' 00:35:48.933 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:48.933 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:48.933 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:48.933 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:48.933 11:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:49.191 [2024-06-10 11:59:21.198402] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:49.450 [2024-06-10 11:59:21.289654] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:49.450 [2024-06-10 11:59:21.289764] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:49.450 [2024-06-10 11:59:21.289787] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:49.450 [2024-06-10 11:59:21.289798] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.450 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.710 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:49.710 "name": "raid_bdev1", 00:35:49.710 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:49.710 "strip_size_kb": 64, 00:35:49.710 "state": "online", 00:35:49.710 "raid_level": "raid5f", 00:35:49.710 "superblock": false, 00:35:49.710 "num_base_bdevs": 3, 00:35:49.710 "num_base_bdevs_discovered": 2, 00:35:49.710 "num_base_bdevs_operational": 2, 00:35:49.710 "base_bdevs_list": [ 00:35:49.710 { 00:35:49.710 "name": null, 00:35:49.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.710 "is_configured": false, 00:35:49.710 "data_offset": 0, 00:35:49.710 "data_size": 65536 00:35:49.710 }, 00:35:49.710 { 00:35:49.710 "name": "BaseBdev2", 00:35:49.710 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:49.710 "is_configured": true, 00:35:49.710 "data_offset": 0, 00:35:49.710 "data_size": 65536 00:35:49.710 }, 00:35:49.710 { 00:35:49.710 "name": "BaseBdev3", 00:35:49.710 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:49.710 "is_configured": true, 00:35:49.710 "data_offset": 0, 00:35:49.710 "data_size": 65536 00:35:49.710 } 00:35:49.710 ] 00:35:49.710 }' 00:35:49.710 11:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:49.710 11:59:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:50.277 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:50.277 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:50.277 11:59:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:50.277 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:50.277 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:50.277 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.277 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.535 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:50.535 "name": "raid_bdev1", 00:35:50.535 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:50.535 "strip_size_kb": 64, 00:35:50.535 "state": "online", 00:35:50.535 "raid_level": "raid5f", 00:35:50.535 "superblock": false, 00:35:50.535 "num_base_bdevs": 3, 00:35:50.535 "num_base_bdevs_discovered": 2, 00:35:50.535 "num_base_bdevs_operational": 2, 00:35:50.535 "base_bdevs_list": [ 00:35:50.535 { 00:35:50.535 "name": null, 00:35:50.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:50.535 "is_configured": false, 00:35:50.535 "data_offset": 0, 00:35:50.535 "data_size": 65536 00:35:50.535 }, 00:35:50.535 { 00:35:50.535 "name": "BaseBdev2", 00:35:50.535 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:50.535 "is_configured": true, 00:35:50.535 "data_offset": 0, 00:35:50.535 "data_size": 65536 00:35:50.535 }, 00:35:50.535 { 00:35:50.535 "name": "BaseBdev3", 00:35:50.535 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:50.535 "is_configured": true, 00:35:50.535 "data_offset": 0, 00:35:50.535 "data_size": 65536 00:35:50.535 } 00:35:50.535 ] 00:35:50.535 }' 00:35:50.535 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:50.535 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:50.535 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:50.535 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:50.535 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:50.793 [2024-06-10 11:59:22.656328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:50.793 [2024-06-10 11:59:22.677043] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:35:50.793 [2024-06-10 11:59:22.687005] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:50.793 11:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:51.727 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:51.727 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:51.727 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:51.727 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:51.727 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:51.727 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:51.728 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.986 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:51.986 "name": "raid_bdev1", 00:35:51.986 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:51.986 "strip_size_kb": 64, 00:35:51.986 "state": "online", 00:35:51.986 "raid_level": "raid5f", 00:35:51.986 "superblock": false, 00:35:51.986 "num_base_bdevs": 3, 00:35:51.986 "num_base_bdevs_discovered": 3, 00:35:51.986 "num_base_bdevs_operational": 3, 00:35:51.986 "process": { 00:35:51.986 "type": "rebuild", 00:35:51.986 "target": "spare", 00:35:51.986 "progress": { 00:35:51.986 "blocks": 24576, 00:35:51.986 "percent": 18 00:35:51.986 } 00:35:51.986 }, 00:35:51.986 "base_bdevs_list": [ 00:35:51.986 { 00:35:51.986 "name": "spare", 00:35:51.986 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:51.986 "is_configured": true, 00:35:51.986 "data_offset": 0, 00:35:51.986 "data_size": 65536 00:35:51.986 }, 00:35:51.986 { 00:35:51.986 "name": "BaseBdev2", 00:35:51.986 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:51.986 "is_configured": true, 00:35:51.986 "data_offset": 0, 00:35:51.986 "data_size": 65536 00:35:51.986 }, 00:35:51.986 { 00:35:51.986 "name": "BaseBdev3", 00:35:51.986 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:51.986 "is_configured": true, 00:35:51.986 "data_offset": 0, 00:35:51.986 "data_size": 65536 00:35:51.986 } 00:35:51.986 ] 00:35:51.986 }' 00:35:51.986 11:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:51.986 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:51.986 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1224 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.244 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:52.503 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:35:52.503 "name": "raid_bdev1", 00:35:52.503 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:52.503 "strip_size_kb": 64, 00:35:52.503 "state": "online", 00:35:52.503 "raid_level": "raid5f", 00:35:52.503 "superblock": false, 00:35:52.503 "num_base_bdevs": 3, 00:35:52.503 "num_base_bdevs_discovered": 3, 00:35:52.503 "num_base_bdevs_operational": 3, 00:35:52.503 "process": { 00:35:52.503 "type": "rebuild", 00:35:52.503 "target": "spare", 00:35:52.503 "progress": { 00:35:52.503 "blocks": 32768, 00:35:52.503 "percent": 25 00:35:52.503 } 00:35:52.503 }, 00:35:52.503 "base_bdevs_list": [ 00:35:52.503 { 00:35:52.503 "name": "spare", 00:35:52.503 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:52.503 "is_configured": true, 00:35:52.503 "data_offset": 0, 00:35:52.503 "data_size": 65536 00:35:52.503 }, 00:35:52.503 { 00:35:52.503 "name": "BaseBdev2", 00:35:52.503 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:52.503 "is_configured": true, 00:35:52.503 "data_offset": 0, 00:35:52.503 "data_size": 65536 00:35:52.503 }, 00:35:52.503 { 00:35:52.503 "name": "BaseBdev3", 00:35:52.503 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:52.503 "is_configured": true, 00:35:52.503 "data_offset": 0, 00:35:52.503 "data_size": 65536 00:35:52.503 } 00:35:52.503 ] 00:35:52.503 }' 00:35:52.503 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:52.503 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:52.503 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:52.503 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:52.503 11:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:53.440 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:53.440 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:53.440 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:53.440 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:53.440 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:53.440 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:53.440 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:53.440 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:53.698 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:53.698 "name": "raid_bdev1", 00:35:53.698 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:53.698 "strip_size_kb": 64, 00:35:53.698 "state": "online", 00:35:53.698 "raid_level": "raid5f", 00:35:53.698 "superblock": false, 00:35:53.698 "num_base_bdevs": 3, 00:35:53.698 "num_base_bdevs_discovered": 3, 00:35:53.698 "num_base_bdevs_operational": 3, 00:35:53.698 "process": { 00:35:53.698 "type": "rebuild", 00:35:53.698 "target": "spare", 00:35:53.698 "progress": { 00:35:53.698 "blocks": 59392, 00:35:53.698 "percent": 45 00:35:53.698 } 00:35:53.698 }, 00:35:53.698 "base_bdevs_list": [ 00:35:53.698 { 00:35:53.698 
"name": "spare", 00:35:53.698 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:53.698 "is_configured": true, 00:35:53.698 "data_offset": 0, 00:35:53.698 "data_size": 65536 00:35:53.698 }, 00:35:53.698 { 00:35:53.698 "name": "BaseBdev2", 00:35:53.698 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:53.698 "is_configured": true, 00:35:53.698 "data_offset": 0, 00:35:53.698 "data_size": 65536 00:35:53.698 }, 00:35:53.698 { 00:35:53.698 "name": "BaseBdev3", 00:35:53.698 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:53.698 "is_configured": true, 00:35:53.698 "data_offset": 0, 00:35:53.698 "data_size": 65536 00:35:53.698 } 00:35:53.698 ] 00:35:53.698 }' 00:35:53.698 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:53.698 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:53.698 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:53.956 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:53.956 11:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:54.889 11:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:54.889 11:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:54.889 11:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:54.889 11:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:54.889 11:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:54.889 11:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:54.889 11:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.889 11:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:55.147 11:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:55.147 "name": "raid_bdev1", 00:35:55.147 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:55.147 "strip_size_kb": 64, 00:35:55.147 "state": "online", 00:35:55.147 "raid_level": "raid5f", 00:35:55.147 "superblock": false, 00:35:55.147 "num_base_bdevs": 3, 00:35:55.147 "num_base_bdevs_discovered": 3, 00:35:55.147 "num_base_bdevs_operational": 3, 00:35:55.147 "process": { 00:35:55.147 "type": "rebuild", 00:35:55.147 "target": "spare", 00:35:55.147 "progress": { 00:35:55.147 "blocks": 88064, 00:35:55.148 "percent": 67 00:35:55.148 } 00:35:55.148 }, 00:35:55.148 "base_bdevs_list": [ 00:35:55.148 { 00:35:55.148 "name": "spare", 00:35:55.148 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:55.148 "is_configured": true, 00:35:55.148 "data_offset": 0, 00:35:55.148 "data_size": 65536 00:35:55.148 }, 00:35:55.148 { 00:35:55.148 "name": "BaseBdev2", 00:35:55.148 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:55.148 "is_configured": true, 00:35:55.148 "data_offset": 0, 00:35:55.148 "data_size": 65536 00:35:55.148 }, 00:35:55.148 { 00:35:55.148 "name": "BaseBdev3", 00:35:55.148 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:55.148 "is_configured": true, 00:35:55.148 "data_offset": 0, 00:35:55.148 "data_size": 65536 00:35:55.148 } 
00:35:55.148 ] 00:35:55.148 }' 00:35:55.148 11:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:55.148 11:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:55.148 11:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:55.148 11:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:55.148 11:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:56.522 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:56.522 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:56.522 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:56.522 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:56.522 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:56.522 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:56.523 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.523 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:56.523 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:56.523 "name": "raid_bdev1", 00:35:56.523 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:56.523 "strip_size_kb": 64, 00:35:56.523 "state": "online", 00:35:56.523 "raid_level": "raid5f", 00:35:56.523 "superblock": false, 00:35:56.523 "num_base_bdevs": 3, 00:35:56.523 "num_base_bdevs_discovered": 3, 00:35:56.523 "num_base_bdevs_operational": 3, 00:35:56.523 "process": { 00:35:56.523 "type": "rebuild", 00:35:56.523 "target": "spare", 00:35:56.523 "progress": { 00:35:56.523 "blocks": 116736, 00:35:56.523 "percent": 89 00:35:56.523 } 00:35:56.523 }, 00:35:56.523 "base_bdevs_list": [ 00:35:56.523 { 00:35:56.523 "name": "spare", 00:35:56.523 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:56.523 "is_configured": true, 00:35:56.523 "data_offset": 0, 00:35:56.523 "data_size": 65536 00:35:56.523 }, 00:35:56.523 { 00:35:56.523 "name": "BaseBdev2", 00:35:56.523 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:56.523 "is_configured": true, 00:35:56.523 "data_offset": 0, 00:35:56.523 "data_size": 65536 00:35:56.523 }, 00:35:56.523 { 00:35:56.523 "name": "BaseBdev3", 00:35:56.523 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:56.523 "is_configured": true, 00:35:56.523 "data_offset": 0, 00:35:56.523 "data_size": 65536 00:35:56.523 } 00:35:56.523 ] 00:35:56.523 }' 00:35:56.523 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:56.523 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:56.523 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:56.781 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:56.781 11:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:57.347 [2024-06-10 11:59:29.158386] 
bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:57.347 [2024-06-10 11:59:29.158497] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:57.347 [2024-06-10 11:59:29.158597] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:57.605 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:57.605 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:57.605 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:57.605 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:57.605 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:57.605 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:57.605 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.605 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.174 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:58.174 "name": "raid_bdev1", 00:35:58.174 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:58.174 "strip_size_kb": 64, 00:35:58.174 "state": "online", 00:35:58.174 "raid_level": "raid5f", 00:35:58.174 "superblock": false, 00:35:58.174 "num_base_bdevs": 3, 00:35:58.174 "num_base_bdevs_discovered": 3, 00:35:58.174 "num_base_bdevs_operational": 3, 00:35:58.174 "base_bdevs_list": [ 00:35:58.174 { 00:35:58.174 "name": "spare", 00:35:58.174 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:58.174 "is_configured": true, 00:35:58.174 "data_offset": 0, 00:35:58.174 "data_size": 65536 00:35:58.174 }, 00:35:58.174 { 00:35:58.174 "name": "BaseBdev2", 00:35:58.174 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:58.174 "is_configured": true, 00:35:58.174 "data_offset": 0, 00:35:58.174 "data_size": 65536 00:35:58.174 }, 00:35:58.174 { 00:35:58.174 "name": "BaseBdev3", 00:35:58.174 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:58.174 "is_configured": true, 00:35:58.174 "data_offset": 0, 00:35:58.174 "data_size": 65536 00:35:58.174 } 00:35:58.174 ] 00:35:58.174 }' 00:35:58.174 11:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.174 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:58.438 "name": "raid_bdev1", 00:35:58.438 "uuid": "f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:58.438 "strip_size_kb": 64, 00:35:58.438 "state": "online", 00:35:58.438 "raid_level": "raid5f", 00:35:58.438 "superblock": false, 00:35:58.438 "num_base_bdevs": 3, 00:35:58.438 "num_base_bdevs_discovered": 3, 00:35:58.438 "num_base_bdevs_operational": 3, 00:35:58.438 "base_bdevs_list": [ 00:35:58.438 { 00:35:58.438 "name": "spare", 00:35:58.438 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:58.438 "is_configured": true, 00:35:58.438 "data_offset": 0, 00:35:58.438 "data_size": 65536 00:35:58.438 }, 00:35:58.438 { 00:35:58.438 "name": "BaseBdev2", 00:35:58.438 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:58.438 "is_configured": true, 00:35:58.438 "data_offset": 0, 00:35:58.438 "data_size": 65536 00:35:58.438 }, 00:35:58.438 { 00:35:58.438 "name": "BaseBdev3", 00:35:58.438 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:58.438 "is_configured": true, 00:35:58.438 "data_offset": 0, 00:35:58.438 "data_size": 65536 00:35:58.438 } 00:35:58.438 ] 00:35:58.438 }' 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.438 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:59.012 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:59.012 "name": "raid_bdev1", 00:35:59.012 "uuid": 
"f94826b5-75c5-4de6-b47e-c2f4295d9184", 00:35:59.012 "strip_size_kb": 64, 00:35:59.012 "state": "online", 00:35:59.012 "raid_level": "raid5f", 00:35:59.012 "superblock": false, 00:35:59.012 "num_base_bdevs": 3, 00:35:59.012 "num_base_bdevs_discovered": 3, 00:35:59.012 "num_base_bdevs_operational": 3, 00:35:59.012 "base_bdevs_list": [ 00:35:59.012 { 00:35:59.012 "name": "spare", 00:35:59.012 "uuid": "7a45725d-5e94-5f0d-920f-0649476da83b", 00:35:59.012 "is_configured": true, 00:35:59.012 "data_offset": 0, 00:35:59.012 "data_size": 65536 00:35:59.012 }, 00:35:59.012 { 00:35:59.012 "name": "BaseBdev2", 00:35:59.012 "uuid": "899e73bb-49b5-59b0-a92f-fcf040dff2c5", 00:35:59.012 "is_configured": true, 00:35:59.012 "data_offset": 0, 00:35:59.012 "data_size": 65536 00:35:59.012 }, 00:35:59.012 { 00:35:59.012 "name": "BaseBdev3", 00:35:59.012 "uuid": "a936404d-caaf-5031-a7ab-4d7f991f72e2", 00:35:59.012 "is_configured": true, 00:35:59.012 "data_offset": 0, 00:35:59.012 "data_size": 65536 00:35:59.012 } 00:35:59.012 ] 00:35:59.012 }' 00:35:59.012 11:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:59.012 11:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.581 11:59:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:59.839 [2024-06-10 11:59:31.757183] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:59.839 [2024-06-10 11:59:31.757240] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:59.840 [2024-06-10 11:59:31.757340] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:59.840 [2024-06-10 11:59:31.757438] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:59.840 [2024-06-10 11:59:31.757451] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:35:59.840 11:59:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:35:59.840 11:59:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:00.099 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:00.357 /dev/nbd0 00:36:00.615 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # break 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:00.616 1+0 records in 00:36:00.616 1+0 records out 00:36:00.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069076 s, 5.9 MB/s 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:00.616 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:00.873 /dev/nbd1 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # break 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@883 -- # (( i = 1 )) 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:00.873 1+0 records in 00:36:00.873 1+0 records out 00:36:00.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103916 s, 3.9 MB/s 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:00.873 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:36:00.874 11:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:36:00.874 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:00.874 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:00.874 11:59:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:36:01.132 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:01.132 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:01.132 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:01.132 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:01.132 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:36:01.132 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.132 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.391 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:01.650 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:01.918 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:01.918 11:59:33 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:01.918 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:01.918 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:01.918 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:01.918 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 154534 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@949 -- # '[' -z 154534 ']' 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # kill -0 154534 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # uname 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 154534 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 154534' 00:36:01.919 killing process with pid 154534 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # kill 154534 00:36:01.919 Received shutdown signal, test time was about 60.000000 seconds 00:36:01.919 00:36:01.919 Latency(us) 00:36:01.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.919 =================================================================================================================== 00:36:01.919 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:01.919 [2024-06-10 11:59:33.747780] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:01.919 11:59:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # wait 154534 00:36:02.485 [2024-06-10 11:59:34.276468] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:04.392 ************************************ 00:36:04.392 END TEST raid5f_rebuild_test 00:36:04.392 ************************************ 00:36:04.392 11:59:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:36:04.392 00:36:04.392 real 0m24.276s 00:36:04.392 user 0m36.021s 00:36:04.392 sys 0m3.458s 00:36:04.392 11:59:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:04.392 11:59:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.392 11:59:36 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:36:04.392 11:59:36 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:36:04.392 11:59:36 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:04.392 11:59:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:04.392 ************************************ 00:36:04.392 START TEST raid5f_rebuild_test_sb 00:36:04.392 
************************************ 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid5f 3 true false true 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:36:04.392 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=155114 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z 
-L bdev_raid 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 155114 /var/tmp/spdk-raid.sock 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@830 -- # '[' -z 155114 ']' 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:04.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:04.393 11:59:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.393 [2024-06-10 11:59:36.173535] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:36:04.393 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:04.393 Zero copy mechanism will not be used. 00:36:04.393 [2024-06-10 11:59:36.173839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155114 ] 00:36:04.393 [2024-06-10 11:59:36.355913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.713 [2024-06-10 11:59:36.676495] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.987 [2024-06-10 11:59:36.984274] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:05.245 11:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:05.245 11:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@863 -- # return 0 00:36:05.245 11:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:05.245 11:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:05.503 BaseBdev1_malloc 00:36:05.503 11:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:05.762 [2024-06-10 11:59:37.674227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:05.762 [2024-06-10 11:59:37.674344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:05.762 [2024-06-10 11:59:37.674396] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:36:05.762 [2024-06-10 11:59:37.674419] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:05.762 [2024-06-10 11:59:37.676971] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:05.762 [2024-06-10 11:59:37.677023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:05.762 BaseBdev1 00:36:05.762 11:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:05.762 11:59:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:06.021 BaseBdev2_malloc 00:36:06.021 11:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:06.279 [2024-06-10 11:59:38.244457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:06.279 [2024-06-10 11:59:38.244591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:06.279 [2024-06-10 11:59:38.244650] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:06.279 [2024-06-10 11:59:38.244673] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:06.280 [2024-06-10 11:59:38.247313] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:06.280 [2024-06-10 11:59:38.247374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:06.280 BaseBdev2 00:36:06.280 11:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:06.280 11:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:06.538 BaseBdev3_malloc 00:36:06.797 11:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:36:07.055 [2024-06-10 11:59:38.911910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:36:07.055 [2024-06-10 11:59:38.912028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.055 [2024-06-10 11:59:38.912063] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:07.055 [2024-06-10 11:59:38.912094] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.055 [2024-06-10 11:59:38.914649] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.055 [2024-06-10 11:59:38.914749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:07.055 BaseBdev3 00:36:07.055 11:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:36:07.314 spare_malloc 00:36:07.314 11:59:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:07.572 spare_delay 00:36:07.572 11:59:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:07.830 [2024-06-10 11:59:39.820299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:07.830 [2024-06-10 11:59:39.820414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.830 [2024-06-10 11:59:39.820461] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009680 00:36:07.830 [2024-06-10 11:59:39.820501] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.830 [2024-06-10 11:59:39.823235] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.830 [2024-06-10 11:59:39.823303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:07.830 spare 00:36:07.830 11:59:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:36:08.088 [2024-06-10 11:59:40.064445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:08.088 [2024-06-10 11:59:40.067483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:08.088 [2024-06-10 11:59:40.067632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:08.088 [2024-06-10 11:59:40.067992] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:36:08.088 [2024-06-10 11:59:40.068029] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:08.088 [2024-06-10 11:59:40.068225] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:08.088 [2024-06-10 11:59:40.075915] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:36:08.088 [2024-06-10 11:59:40.075975] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:36:08.088 [2024-06-10 11:59:40.076350] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:08.088 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.089 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.380 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:08.380 "name": "raid_bdev1", 00:36:08.380 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:08.380 "strip_size_kb": 64, 00:36:08.380 "state": "online", 00:36:08.380 "raid_level": "raid5f", 00:36:08.380 "superblock": true, 
00:36:08.380 "num_base_bdevs": 3, 00:36:08.380 "num_base_bdevs_discovered": 3, 00:36:08.380 "num_base_bdevs_operational": 3, 00:36:08.380 "base_bdevs_list": [ 00:36:08.380 { 00:36:08.380 "name": "BaseBdev1", 00:36:08.380 "uuid": "d081b0cb-45f8-5cda-842d-be94455ed2f5", 00:36:08.380 "is_configured": true, 00:36:08.380 "data_offset": 2048, 00:36:08.380 "data_size": 63488 00:36:08.380 }, 00:36:08.380 { 00:36:08.380 "name": "BaseBdev2", 00:36:08.380 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:08.380 "is_configured": true, 00:36:08.380 "data_offset": 2048, 00:36:08.380 "data_size": 63488 00:36:08.380 }, 00:36:08.380 { 00:36:08.380 "name": "BaseBdev3", 00:36:08.380 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:08.380 "is_configured": true, 00:36:08.380 "data_offset": 2048, 00:36:08.380 "data_size": 63488 00:36:08.380 } 00:36:08.380 ] 00:36:08.380 }' 00:36:08.380 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:08.380 11:59:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.316 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:09.316 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:36:09.316 [2024-06-10 11:59:41.304400] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:09.316 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:36:09.316 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.316 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:09.575 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:09.835 [2024-06-10 11:59:41.820368] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:09.835 /dev/nbd0 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:09.835 1+0 records in 00:36:09.835 1+0 records out 00:36:09.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808786 s, 5.1 MB/s 00:36:09.835 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:36:10.094 11:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:36:10.352 496+0 records in 00:36:10.352 496+0 records out 00:36:10.352 65011712 bytes (65 MB, 62 MiB) copied, 0.48355 s, 134 MB/s 00:36:10.352 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:10.352 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:10.352 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:10.352 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:10.352 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:10.352 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
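The dd transfer recorded above is sized to fill the raid bdev exactly; the figures can be reconstructed from the trace (the strip accounting is my reading of the three-disk raid5f layout, not quoted from the script):

    write unit  = 2 data strips x 64 KiB = 131072 bytes = 256 blocks x 512 B  (write_unit_size=256 above)
    raid size   = 126976 blocks x 512 B  = 65011712 bytes                      (raid_bdev_size=126976 above)
    dd count    = 126976 / 256           = 496 full-stripe writes
    bytes moved = 496 x 131072           = 65011712 bytes, i.e. the "65 MB, 62 MiB" reported by dd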
00:36:10.352 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:10.609 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:10.609 [2024-06-10 11:59:42.668439] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:10.867 [2024-06-10 11:59:42.866769] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:10.867 11:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.126 11:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:11.126 "name": "raid_bdev1", 00:36:11.126 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:11.126 "strip_size_kb": 64, 00:36:11.126 "state": "online", 00:36:11.126 "raid_level": "raid5f", 00:36:11.126 "superblock": true, 00:36:11.126 "num_base_bdevs": 3, 00:36:11.126 "num_base_bdevs_discovered": 2, 00:36:11.126 "num_base_bdevs_operational": 2, 00:36:11.126 "base_bdevs_list": [ 00:36:11.126 { 00:36:11.126 "name": null, 00:36:11.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.126 "is_configured": false, 00:36:11.126 "data_offset": 2048, 00:36:11.126 "data_size": 63488 00:36:11.126 
}, 00:36:11.126 { 00:36:11.126 "name": "BaseBdev2", 00:36:11.126 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:11.126 "is_configured": true, 00:36:11.126 "data_offset": 2048, 00:36:11.126 "data_size": 63488 00:36:11.126 }, 00:36:11.126 { 00:36:11.126 "name": "BaseBdev3", 00:36:11.126 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:11.126 "is_configured": true, 00:36:11.126 "data_offset": 2048, 00:36:11.126 "data_size": 63488 00:36:11.126 } 00:36:11.126 ] 00:36:11.126 }' 00:36:11.126 11:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:11.126 11:59:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.105 11:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:12.105 [2024-06-10 11:59:44.131164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:12.105 [2024-06-10 11:59:44.154863] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:36:12.364 [2024-06-10 11:59:44.166015] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:12.364 11:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:13.299 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:13.299 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:13.299 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:13.299 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:13.299 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:13.299 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.299 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.557 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:13.557 "name": "raid_bdev1", 00:36:13.557 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:13.557 "strip_size_kb": 64, 00:36:13.557 "state": "online", 00:36:13.557 "raid_level": "raid5f", 00:36:13.557 "superblock": true, 00:36:13.557 "num_base_bdevs": 3, 00:36:13.557 "num_base_bdevs_discovered": 3, 00:36:13.557 "num_base_bdevs_operational": 3, 00:36:13.557 "process": { 00:36:13.557 "type": "rebuild", 00:36:13.557 "target": "spare", 00:36:13.557 "progress": { 00:36:13.557 "blocks": 24576, 00:36:13.557 "percent": 19 00:36:13.557 } 00:36:13.557 }, 00:36:13.557 "base_bdevs_list": [ 00:36:13.557 { 00:36:13.557 "name": "spare", 00:36:13.557 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:13.557 "is_configured": true, 00:36:13.557 "data_offset": 2048, 00:36:13.557 "data_size": 63488 00:36:13.557 }, 00:36:13.557 { 00:36:13.557 "name": "BaseBdev2", 00:36:13.557 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:13.557 "is_configured": true, 00:36:13.557 "data_offset": 2048, 00:36:13.557 "data_size": 63488 00:36:13.557 }, 00:36:13.557 { 00:36:13.557 "name": "BaseBdev3", 00:36:13.557 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:13.557 "is_configured": true, 
00:36:13.557 "data_offset": 2048, 00:36:13.557 "data_size": 63488 00:36:13.557 } 00:36:13.557 ] 00:36:13.557 }' 00:36:13.557 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:13.557 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:13.557 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:13.557 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:13.557 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:13.815 [2024-06-10 11:59:45.839816] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:14.074 [2024-06-10 11:59:45.884784] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:14.074 [2024-06-10 11:59:45.884892] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:14.074 [2024-06-10 11:59:45.884913] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:14.074 [2024-06-10 11:59:45.884922] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.074 11:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.333 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:14.333 "name": "raid_bdev1", 00:36:14.333 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:14.333 "strip_size_kb": 64, 00:36:14.333 "state": "online", 00:36:14.333 "raid_level": "raid5f", 00:36:14.333 "superblock": true, 00:36:14.333 "num_base_bdevs": 3, 00:36:14.333 "num_base_bdevs_discovered": 2, 00:36:14.333 "num_base_bdevs_operational": 2, 00:36:14.333 "base_bdevs_list": [ 00:36:14.333 { 00:36:14.333 "name": null, 00:36:14.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.333 "is_configured": false, 00:36:14.333 "data_offset": 2048, 00:36:14.333 "data_size": 63488 00:36:14.333 }, 
00:36:14.333 { 00:36:14.333 "name": "BaseBdev2", 00:36:14.333 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:14.333 "is_configured": true, 00:36:14.333 "data_offset": 2048, 00:36:14.333 "data_size": 63488 00:36:14.333 }, 00:36:14.333 { 00:36:14.333 "name": "BaseBdev3", 00:36:14.333 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:14.333 "is_configured": true, 00:36:14.333 "data_offset": 2048, 00:36:14.333 "data_size": 63488 00:36:14.333 } 00:36:14.333 ] 00:36:14.333 }' 00:36:14.333 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:14.333 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:14.899 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:14.900 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:14.900 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:14.900 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:14.900 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:14.900 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.900 11:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:15.230 11:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:15.230 "name": "raid_bdev1", 00:36:15.230 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:15.230 "strip_size_kb": 64, 00:36:15.230 "state": "online", 00:36:15.230 "raid_level": "raid5f", 00:36:15.230 "superblock": true, 00:36:15.230 "num_base_bdevs": 3, 00:36:15.230 "num_base_bdevs_discovered": 2, 00:36:15.230 "num_base_bdevs_operational": 2, 00:36:15.230 "base_bdevs_list": [ 00:36:15.230 { 00:36:15.230 "name": null, 00:36:15.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.230 "is_configured": false, 00:36:15.230 "data_offset": 2048, 00:36:15.230 "data_size": 63488 00:36:15.230 }, 00:36:15.230 { 00:36:15.230 "name": "BaseBdev2", 00:36:15.230 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:15.230 "is_configured": true, 00:36:15.230 "data_offset": 2048, 00:36:15.230 "data_size": 63488 00:36:15.230 }, 00:36:15.230 { 00:36:15.230 "name": "BaseBdev3", 00:36:15.230 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:15.230 "is_configured": true, 00:36:15.230 "data_offset": 2048, 00:36:15.230 "data_size": 63488 00:36:15.230 } 00:36:15.230 ] 00:36:15.230 }' 00:36:15.230 11:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:15.492 11:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:15.492 11:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:15.492 11:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:15.492 11:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:15.750 [2024-06-10 11:59:47.625423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:15.750 
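At this point the spare is claimed and a second rebuild is about to run for the superblock variant. The traces on either side repeat the same polling idiom: fetch bdev_raid_get_bdevs all, select raid_bdev1 with jq, and loop on .process.type / .process.target with sleep 1 until the rebuild is no longer reported, bounded by a timeout. A condensed, simplified sketch of that loop (the real checks live in verify_raid_bdev_process() in bdev_raid.sh; the timeout value here is illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    deadline=$((SECONDS + 60))                # illustrative bound; the script derives its own
    while (( SECONDS < deadline )); do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") != rebuild ]] && break
        sleep 1
    done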
[2024-06-10 11:59:47.645614] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:36:15.750 [2024-06-10 11:59:47.655601] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:15.751 11:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:16.685 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:16.685 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:16.685 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:16.685 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:16.685 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:16.685 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.685 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.944 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:16.944 "name": "raid_bdev1", 00:36:16.944 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:16.944 "strip_size_kb": 64, 00:36:16.944 "state": "online", 00:36:16.944 "raid_level": "raid5f", 00:36:16.944 "superblock": true, 00:36:16.944 "num_base_bdevs": 3, 00:36:16.944 "num_base_bdevs_discovered": 3, 00:36:16.944 "num_base_bdevs_operational": 3, 00:36:16.944 "process": { 00:36:16.944 "type": "rebuild", 00:36:16.944 "target": "spare", 00:36:16.944 "progress": { 00:36:16.944 "blocks": 24576, 00:36:16.944 "percent": 19 00:36:16.944 } 00:36:16.944 }, 00:36:16.944 "base_bdevs_list": [ 00:36:16.944 { 00:36:16.944 "name": "spare", 00:36:16.944 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:16.944 "is_configured": true, 00:36:16.944 "data_offset": 2048, 00:36:16.944 "data_size": 63488 00:36:16.944 }, 00:36:16.944 { 00:36:16.944 "name": "BaseBdev2", 00:36:16.944 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:16.944 "is_configured": true, 00:36:16.944 "data_offset": 2048, 00:36:16.944 "data_size": 63488 00:36:16.944 }, 00:36:16.944 { 00:36:16.944 "name": "BaseBdev3", 00:36:16.944 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:16.944 "is_configured": true, 00:36:16.944 "data_offset": 2048, 00:36:16.944 "data_size": 63488 00:36:16.944 } 00:36:16.944 ] 00:36:16.944 }' 00:36:16.944 11:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:36:17.203 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:36:17.203 11:59:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1249 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.203 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:17.462 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:17.462 "name": "raid_bdev1", 00:36:17.462 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:17.462 "strip_size_kb": 64, 00:36:17.462 "state": "online", 00:36:17.462 "raid_level": "raid5f", 00:36:17.462 "superblock": true, 00:36:17.462 "num_base_bdevs": 3, 00:36:17.462 "num_base_bdevs_discovered": 3, 00:36:17.462 "num_base_bdevs_operational": 3, 00:36:17.462 "process": { 00:36:17.462 "type": "rebuild", 00:36:17.462 "target": "spare", 00:36:17.462 "progress": { 00:36:17.462 "blocks": 32768, 00:36:17.462 "percent": 25 00:36:17.462 } 00:36:17.462 }, 00:36:17.462 "base_bdevs_list": [ 00:36:17.462 { 00:36:17.462 "name": "spare", 00:36:17.462 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:17.462 "is_configured": true, 00:36:17.462 "data_offset": 2048, 00:36:17.462 "data_size": 63488 00:36:17.462 }, 00:36:17.462 { 00:36:17.462 "name": "BaseBdev2", 00:36:17.462 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:17.462 "is_configured": true, 00:36:17.462 "data_offset": 2048, 00:36:17.462 "data_size": 63488 00:36:17.462 }, 00:36:17.462 { 00:36:17.462 "name": "BaseBdev3", 00:36:17.462 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:17.462 "is_configured": true, 00:36:17.462 "data_offset": 2048, 00:36:17.462 "data_size": 63488 00:36:17.462 } 00:36:17.462 ] 00:36:17.462 }' 00:36:17.462 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:17.462 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:17.462 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:17.462 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:17.462 11:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:18.401 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:18.401 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:18.401 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:18.401 11:59:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:18.401 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:18.401 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:18.401 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:18.401 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:18.968 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:18.968 "name": "raid_bdev1", 00:36:18.968 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:18.968 "strip_size_kb": 64, 00:36:18.968 "state": "online", 00:36:18.968 "raid_level": "raid5f", 00:36:18.968 "superblock": true, 00:36:18.968 "num_base_bdevs": 3, 00:36:18.968 "num_base_bdevs_discovered": 3, 00:36:18.968 "num_base_bdevs_operational": 3, 00:36:18.968 "process": { 00:36:18.968 "type": "rebuild", 00:36:18.968 "target": "spare", 00:36:18.968 "progress": { 00:36:18.968 "blocks": 61440, 00:36:18.968 "percent": 48 00:36:18.968 } 00:36:18.968 }, 00:36:18.968 "base_bdevs_list": [ 00:36:18.968 { 00:36:18.968 "name": "spare", 00:36:18.968 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:18.968 "is_configured": true, 00:36:18.968 "data_offset": 2048, 00:36:18.968 "data_size": 63488 00:36:18.968 }, 00:36:18.968 { 00:36:18.968 "name": "BaseBdev2", 00:36:18.968 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:18.968 "is_configured": true, 00:36:18.968 "data_offset": 2048, 00:36:18.968 "data_size": 63488 00:36:18.968 }, 00:36:18.968 { 00:36:18.968 "name": "BaseBdev3", 00:36:18.968 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:18.968 "is_configured": true, 00:36:18.968 "data_offset": 2048, 00:36:18.968 "data_size": 63488 00:36:18.968 } 00:36:18.968 ] 00:36:18.968 }' 00:36:18.968 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:18.968 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:18.968 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:18.968 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:18.968 11:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:19.900 11:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:19.900 11:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:19.900 11:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:19.900 11:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:19.900 11:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:19.900 11:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:19.900 11:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:19.900 11:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:36:20.159 11:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:20.159 "name": "raid_bdev1", 00:36:20.159 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:20.159 "strip_size_kb": 64, 00:36:20.159 "state": "online", 00:36:20.159 "raid_level": "raid5f", 00:36:20.159 "superblock": true, 00:36:20.159 "num_base_bdevs": 3, 00:36:20.159 "num_base_bdevs_discovered": 3, 00:36:20.159 "num_base_bdevs_operational": 3, 00:36:20.159 "process": { 00:36:20.159 "type": "rebuild", 00:36:20.159 "target": "spare", 00:36:20.159 "progress": { 00:36:20.159 "blocks": 88064, 00:36:20.159 "percent": 69 00:36:20.159 } 00:36:20.159 }, 00:36:20.159 "base_bdevs_list": [ 00:36:20.159 { 00:36:20.159 "name": "spare", 00:36:20.159 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:20.159 "is_configured": true, 00:36:20.159 "data_offset": 2048, 00:36:20.159 "data_size": 63488 00:36:20.159 }, 00:36:20.159 { 00:36:20.159 "name": "BaseBdev2", 00:36:20.159 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:20.159 "is_configured": true, 00:36:20.159 "data_offset": 2048, 00:36:20.159 "data_size": 63488 00:36:20.159 }, 00:36:20.159 { 00:36:20.159 "name": "BaseBdev3", 00:36:20.159 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:20.159 "is_configured": true, 00:36:20.159 "data_offset": 2048, 00:36:20.159 "data_size": 63488 00:36:20.159 } 00:36:20.159 ] 00:36:20.159 }' 00:36:20.159 11:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:20.159 11:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:20.159 11:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:20.159 11:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:20.159 11:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:21.535 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:21.535 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:21.535 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:21.535 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:21.535 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:21.536 "name": "raid_bdev1", 00:36:21.536 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:21.536 "strip_size_kb": 64, 00:36:21.536 "state": "online", 00:36:21.536 "raid_level": "raid5f", 00:36:21.536 "superblock": true, 00:36:21.536 "num_base_bdevs": 3, 00:36:21.536 "num_base_bdevs_discovered": 3, 00:36:21.536 "num_base_bdevs_operational": 3, 00:36:21.536 "process": { 00:36:21.536 "type": "rebuild", 00:36:21.536 "target": "spare", 00:36:21.536 "progress": { 
00:36:21.536 "blocks": 116736, 00:36:21.536 "percent": 91 00:36:21.536 } 00:36:21.536 }, 00:36:21.536 "base_bdevs_list": [ 00:36:21.536 { 00:36:21.536 "name": "spare", 00:36:21.536 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:21.536 "is_configured": true, 00:36:21.536 "data_offset": 2048, 00:36:21.536 "data_size": 63488 00:36:21.536 }, 00:36:21.536 { 00:36:21.536 "name": "BaseBdev2", 00:36:21.536 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:21.536 "is_configured": true, 00:36:21.536 "data_offset": 2048, 00:36:21.536 "data_size": 63488 00:36:21.536 }, 00:36:21.536 { 00:36:21.536 "name": "BaseBdev3", 00:36:21.536 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:21.536 "is_configured": true, 00:36:21.536 "data_offset": 2048, 00:36:21.536 "data_size": 63488 00:36:21.536 } 00:36:21.536 ] 00:36:21.536 }' 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:21.536 11:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:22.125 [2024-06-10 11:59:53.919786] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:22.125 [2024-06-10 11:59:53.919877] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:22.125 [2024-06-10 11:59:53.920024] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:22.691 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:22.691 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:22.691 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:22.691 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:22.691 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:22.691 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:22.691 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.691 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:22.949 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:22.949 "name": "raid_bdev1", 00:36:22.949 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:22.949 "strip_size_kb": 64, 00:36:22.949 "state": "online", 00:36:22.949 "raid_level": "raid5f", 00:36:22.949 "superblock": true, 00:36:22.949 "num_base_bdevs": 3, 00:36:22.949 "num_base_bdevs_discovered": 3, 00:36:22.949 "num_base_bdevs_operational": 3, 00:36:22.949 "base_bdevs_list": [ 00:36:22.949 { 00:36:22.949 "name": "spare", 00:36:22.949 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:22.949 "is_configured": true, 00:36:22.949 "data_offset": 2048, 00:36:22.949 "data_size": 63488 00:36:22.950 }, 00:36:22.950 { 00:36:22.950 "name": "BaseBdev2", 00:36:22.950 "uuid": 
"15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:22.950 "is_configured": true, 00:36:22.950 "data_offset": 2048, 00:36:22.950 "data_size": 63488 00:36:22.950 }, 00:36:22.950 { 00:36:22.950 "name": "BaseBdev3", 00:36:22.950 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:22.950 "is_configured": true, 00:36:22.950 "data_offset": 2048, 00:36:22.950 "data_size": 63488 00:36:22.950 } 00:36:22.950 ] 00:36:22.950 }' 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.950 11:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:23.517 "name": "raid_bdev1", 00:36:23.517 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:23.517 "strip_size_kb": 64, 00:36:23.517 "state": "online", 00:36:23.517 "raid_level": "raid5f", 00:36:23.517 "superblock": true, 00:36:23.517 "num_base_bdevs": 3, 00:36:23.517 "num_base_bdevs_discovered": 3, 00:36:23.517 "num_base_bdevs_operational": 3, 00:36:23.517 "base_bdevs_list": [ 00:36:23.517 { 00:36:23.517 "name": "spare", 00:36:23.517 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:23.517 "is_configured": true, 00:36:23.517 "data_offset": 2048, 00:36:23.517 "data_size": 63488 00:36:23.517 }, 00:36:23.517 { 00:36:23.517 "name": "BaseBdev2", 00:36:23.517 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:23.517 "is_configured": true, 00:36:23.517 "data_offset": 2048, 00:36:23.517 "data_size": 63488 00:36:23.517 }, 00:36:23.517 { 00:36:23.517 "name": "BaseBdev3", 00:36:23.517 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:23.517 "is_configured": true, 00:36:23.517 "data_offset": 2048, 00:36:23.517 "data_size": 63488 00:36:23.517 } 00:36:23.517 ] 00:36:23.517 }' 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.517 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.776 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:23.776 "name": "raid_bdev1", 00:36:23.776 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:23.776 "strip_size_kb": 64, 00:36:23.776 "state": "online", 00:36:23.776 "raid_level": "raid5f", 00:36:23.776 "superblock": true, 00:36:23.776 "num_base_bdevs": 3, 00:36:23.776 "num_base_bdevs_discovered": 3, 00:36:23.776 "num_base_bdevs_operational": 3, 00:36:23.776 "base_bdevs_list": [ 00:36:23.776 { 00:36:23.776 "name": "spare", 00:36:23.776 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:23.776 "is_configured": true, 00:36:23.776 "data_offset": 2048, 00:36:23.776 "data_size": 63488 00:36:23.776 }, 00:36:23.776 { 00:36:23.776 "name": "BaseBdev2", 00:36:23.776 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:23.776 "is_configured": true, 00:36:23.776 "data_offset": 2048, 00:36:23.776 "data_size": 63488 00:36:23.776 }, 00:36:23.776 { 00:36:23.776 "name": "BaseBdev3", 00:36:23.776 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:23.776 "is_configured": true, 00:36:23.776 "data_offset": 2048, 00:36:23.776 "data_size": 63488 00:36:23.776 } 00:36:23.776 ] 00:36:23.776 }' 00:36:23.776 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:23.776 11:59:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.344 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:24.602 [2024-06-10 11:59:56.483119] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:24.602 [2024-06-10 11:59:56.483163] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:24.602 [2024-06-10 11:59:56.483246] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:24.602 [2024-06-10 11:59:56.483332] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:24.602 [2024-06-10 11:59:56.483344] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:36:24.602 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.602 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:24.860 11:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:25.118 /dev/nbd0 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:25.118 1+0 records in 00:36:25.118 1+0 records out 00:36:25.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431766 s, 9.5 MB/s 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 
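The verify_raid_bdev_process/verify_raid_bdev_state traces repeated throughout this run all reduce to one pattern: fetch the RAID bdev over the RPC socket, filter it with jq, and compare the rebuild fields against the expected values. Below is a minimal stand-alone sketch of that pattern, not the SPDK harness itself: the poll_rebuild helper, the retry loop, and the RPC/RPC_SOCK variable names are illustrative assumptions, while the bdev_raid_get_bdevs call and the jq filters mirror what the trace shows. The sketch also uses [[ ... ]] for the string comparisons, which sidesteps the "[: =: unary operator expected" error that bdev_raid.sh line 665 logged earlier in this run when an unquoted empty variable was handed to the single-bracket test.

#!/usr/bin/env bash
# Hedged sketch: poll a raid bdev's rebuild state over the SPDK RPC socket.
# RPC, RPC_SOCK, and poll_rebuild are illustrative names, not part of the test suite.
set -uo pipefail

RPC=./scripts/rpc.py                # assumed path to rpc.py
RPC_SOCK=/var/tmp/spdk-raid.sock    # socket used by this test run

poll_rebuild() {
    local name=$1 expected_type=$2 expected_target=$3
    local info ptype ptarget

    # Same query the trace shows: dump all raid bdevs, keep the one we care about.
    info=$("$RPC" -s "$RPC_SOCK" bdev_raid_get_bdevs all |
           jq -r --arg n "$name" '.[] | select(.name == $n)')

    # '// "none"' keeps jq from printing null once the rebuild has finished,
    # so the comparisons below always see a plain string.
    ptype=$(jq -r '.process.type // "none"' <<<"$info")
    ptarget=$(jq -r '.process.target // "none"' <<<"$info")

    # [[ ... ]] never word-splits its operands, so an empty value cannot
    # reproduce the unary-operator error seen earlier in this log.
    [[ $ptype == "$expected_type" && $ptarget == "$expected_target" ]]
}

# Example: wait up to ~60 seconds for a rebuild targeting "spare" to be reported.
for _ in $(seq 60); do
    poll_rebuild raid_bdev1 rebuild spare && break
    sleep 1
done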
00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:25.118 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:25.381 /dev/nbd1 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:25.381 1+0 records in 00:36:25.381 1+0 records out 00:36:25.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550682 s, 7.4 MB/s 00:36:25.381 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:25.640 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:25.898 11:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:36:26.156 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:26.414 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:26.672 [2024-06-10 11:59:58.626291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:26.672 [2024-06-10 11:59:58.626383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:26.672 [2024-06-10 11:59:58.626442] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:26.672 [2024-06-10 11:59:58.626473] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:26.672 [2024-06-10 11:59:58.629335] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:26.672 [2024-06-10 11:59:58.629403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:26.672 
[2024-06-10 11:59:58.629561] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:26.672 [2024-06-10 11:59:58.629623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:26.672 [2024-06-10 11:59:58.629746] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:26.672 [2024-06-10 11:59:58.629849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:26.672 spare 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.672 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.672 [2024-06-10 11:59:58.729950] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:36:26.672 [2024-06-10 11:59:58.729976] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:26.672 [2024-06-10 11:59:58.730145] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:36:26.930 [2024-06-10 11:59:58.737442] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:36:26.930 [2024-06-10 11:59:58.737465] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:36:26.930 [2024-06-10 11:59:58.737692] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:26.930 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:26.930 "name": "raid_bdev1", 00:36:26.930 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:26.930 "strip_size_kb": 64, 00:36:26.930 "state": "online", 00:36:26.930 "raid_level": "raid5f", 00:36:26.930 "superblock": true, 00:36:26.930 "num_base_bdevs": 3, 00:36:26.930 "num_base_bdevs_discovered": 3, 00:36:26.930 "num_base_bdevs_operational": 3, 00:36:26.930 "base_bdevs_list": [ 00:36:26.930 { 00:36:26.930 "name": "spare", 00:36:26.930 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:26.930 "is_configured": true, 00:36:26.930 "data_offset": 2048, 00:36:26.930 "data_size": 63488 00:36:26.930 }, 00:36:26.930 { 00:36:26.930 "name": "BaseBdev2", 00:36:26.930 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 
00:36:26.930 "is_configured": true, 00:36:26.930 "data_offset": 2048, 00:36:26.930 "data_size": 63488 00:36:26.930 }, 00:36:26.930 { 00:36:26.930 "name": "BaseBdev3", 00:36:26.930 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:26.930 "is_configured": true, 00:36:26.930 "data_offset": 2048, 00:36:26.930 "data_size": 63488 00:36:26.930 } 00:36:26.930 ] 00:36:26.930 }' 00:36:26.930 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:26.930 11:59:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.498 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:27.498 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:27.498 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:27.498 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:27.498 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:27.498 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.498 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.762 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:27.762 "name": "raid_bdev1", 00:36:27.762 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:27.762 "strip_size_kb": 64, 00:36:27.762 "state": "online", 00:36:27.762 "raid_level": "raid5f", 00:36:27.762 "superblock": true, 00:36:27.762 "num_base_bdevs": 3, 00:36:27.762 "num_base_bdevs_discovered": 3, 00:36:27.762 "num_base_bdevs_operational": 3, 00:36:27.762 "base_bdevs_list": [ 00:36:27.762 { 00:36:27.762 "name": "spare", 00:36:27.762 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:27.762 "is_configured": true, 00:36:27.762 "data_offset": 2048, 00:36:27.762 "data_size": 63488 00:36:27.762 }, 00:36:27.762 { 00:36:27.762 "name": "BaseBdev2", 00:36:27.762 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:27.762 "is_configured": true, 00:36:27.762 "data_offset": 2048, 00:36:27.762 "data_size": 63488 00:36:27.762 }, 00:36:27.762 { 00:36:27.762 "name": "BaseBdev3", 00:36:27.762 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:27.762 "is_configured": true, 00:36:27.762 "data_offset": 2048, 00:36:27.762 "data_size": 63488 00:36:27.762 } 00:36:27.762 ] 00:36:27.762 }' 00:36:27.762 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:27.762 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:27.762 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:28.021 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:28.021 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:28.021 11:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:28.021 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:36:28.021 12:00:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:28.278 [2024-06-10 12:00:00.303571] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:28.278 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:28.278 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:28.278 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:28.279 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:28.534 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:28.534 "name": "raid_bdev1", 00:36:28.534 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:28.534 "strip_size_kb": 64, 00:36:28.534 "state": "online", 00:36:28.534 "raid_level": "raid5f", 00:36:28.534 "superblock": true, 00:36:28.534 "num_base_bdevs": 3, 00:36:28.534 "num_base_bdevs_discovered": 2, 00:36:28.534 "num_base_bdevs_operational": 2, 00:36:28.534 "base_bdevs_list": [ 00:36:28.534 { 00:36:28.534 "name": null, 00:36:28.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.534 "is_configured": false, 00:36:28.534 "data_offset": 2048, 00:36:28.534 "data_size": 63488 00:36:28.534 }, 00:36:28.534 { 00:36:28.534 "name": "BaseBdev2", 00:36:28.534 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:28.534 "is_configured": true, 00:36:28.534 "data_offset": 2048, 00:36:28.534 "data_size": 63488 00:36:28.534 }, 00:36:28.534 { 00:36:28.534 "name": "BaseBdev3", 00:36:28.534 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:28.534 "is_configured": true, 00:36:28.534 "data_offset": 2048, 00:36:28.534 "data_size": 63488 00:36:28.534 } 00:36:28.534 ] 00:36:28.534 }' 00:36:28.534 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:28.534 12:00:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.466 12:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:29.466 [2024-06-10 12:00:01.435837] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:29.466 [2024-06-10 12:00:01.436039] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare 
(4) smaller than existing raid bdev raid_bdev1 (5) 00:36:29.466 [2024-06-10 12:00:01.436055] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:29.466 [2024-06-10 12:00:01.436128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:29.466 [2024-06-10 12:00:01.455856] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047a40 00:36:29.466 [2024-06-10 12:00:01.466048] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:29.466 12:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:30.839 "name": "raid_bdev1", 00:36:30.839 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:30.839 "strip_size_kb": 64, 00:36:30.839 "state": "online", 00:36:30.839 "raid_level": "raid5f", 00:36:30.839 "superblock": true, 00:36:30.839 "num_base_bdevs": 3, 00:36:30.839 "num_base_bdevs_discovered": 3, 00:36:30.839 "num_base_bdevs_operational": 3, 00:36:30.839 "process": { 00:36:30.839 "type": "rebuild", 00:36:30.839 "target": "spare", 00:36:30.839 "progress": { 00:36:30.839 "blocks": 24576, 00:36:30.839 "percent": 19 00:36:30.839 } 00:36:30.839 }, 00:36:30.839 "base_bdevs_list": [ 00:36:30.839 { 00:36:30.839 "name": "spare", 00:36:30.839 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:30.839 "is_configured": true, 00:36:30.839 "data_offset": 2048, 00:36:30.839 "data_size": 63488 00:36:30.839 }, 00:36:30.839 { 00:36:30.839 "name": "BaseBdev2", 00:36:30.839 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:30.839 "is_configured": true, 00:36:30.839 "data_offset": 2048, 00:36:30.839 "data_size": 63488 00:36:30.839 }, 00:36:30.839 { 00:36:30.839 "name": "BaseBdev3", 00:36:30.839 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:30.839 "is_configured": true, 00:36:30.839 "data_offset": 2048, 00:36:30.839 "data_size": 63488 00:36:30.839 } 00:36:30.839 ] 00:36:30.839 }' 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:30.839 12:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:31.096 [2024-06-10 12:00:03.064101] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:31.096 [2024-06-10 12:00:03.082141] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:31.096 [2024-06-10 12:00:03.082232] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:31.096 [2024-06-10 12:00:03.082255] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:31.096 [2024-06-10 12:00:03.082264] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.096 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.353 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:31.354 "name": "raid_bdev1", 00:36:31.354 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:31.354 "strip_size_kb": 64, 00:36:31.354 "state": "online", 00:36:31.354 "raid_level": "raid5f", 00:36:31.354 "superblock": true, 00:36:31.354 "num_base_bdevs": 3, 00:36:31.354 "num_base_bdevs_discovered": 2, 00:36:31.354 "num_base_bdevs_operational": 2, 00:36:31.354 "base_bdevs_list": [ 00:36:31.354 { 00:36:31.354 "name": null, 00:36:31.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:31.354 "is_configured": false, 00:36:31.354 "data_offset": 2048, 00:36:31.354 "data_size": 63488 00:36:31.354 }, 00:36:31.354 { 00:36:31.354 "name": "BaseBdev2", 00:36:31.354 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:31.354 "is_configured": true, 00:36:31.354 "data_offset": 2048, 00:36:31.354 "data_size": 63488 00:36:31.354 }, 00:36:31.354 { 00:36:31.354 "name": "BaseBdev3", 00:36:31.354 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:31.354 "is_configured": true, 00:36:31.354 "data_offset": 2048, 00:36:31.354 "data_size": 63488 00:36:31.354 } 00:36:31.354 ] 00:36:31.354 }' 00:36:31.354 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:31.354 12:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.919 12:00:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:32.483 [2024-06-10 12:00:04.255138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:32.483 [2024-06-10 12:00:04.255233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:32.483 [2024-06-10 12:00:04.255272] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:36:32.483 [2024-06-10 12:00:04.255304] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:32.483 [2024-06-10 12:00:04.255887] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:32.483 [2024-06-10 12:00:04.255932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:32.483 [2024-06-10 12:00:04.256092] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:32.483 [2024-06-10 12:00:04.256108] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:32.483 [2024-06-10 12:00:04.256119] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:32.483 [2024-06-10 12:00:04.256168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:32.483 [2024-06-10 12:00:04.274767] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047d80 00:36:32.483 spare 00:36:32.484 [2024-06-10 12:00:04.284138] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:32.484 12:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:36:33.417 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:33.417 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:33.417 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:33.417 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:33.417 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:33.417 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.417 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:33.675 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:33.675 "name": "raid_bdev1", 00:36:33.675 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:33.675 "strip_size_kb": 64, 00:36:33.675 "state": "online", 00:36:33.675 "raid_level": "raid5f", 00:36:33.675 "superblock": true, 00:36:33.675 "num_base_bdevs": 3, 00:36:33.675 "num_base_bdevs_discovered": 3, 00:36:33.675 "num_base_bdevs_operational": 3, 00:36:33.675 "process": { 00:36:33.675 "type": "rebuild", 00:36:33.675 "target": "spare", 00:36:33.675 "progress": { 00:36:33.675 "blocks": 24576, 00:36:33.675 "percent": 19 00:36:33.675 } 00:36:33.675 }, 00:36:33.675 "base_bdevs_list": [ 00:36:33.675 { 00:36:33.675 "name": "spare", 00:36:33.675 "uuid": "5454fb13-a3ce-55aa-aa5f-d0ee920a187a", 00:36:33.675 
"is_configured": true, 00:36:33.675 "data_offset": 2048, 00:36:33.675 "data_size": 63488 00:36:33.675 }, 00:36:33.675 { 00:36:33.675 "name": "BaseBdev2", 00:36:33.675 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:33.675 "is_configured": true, 00:36:33.675 "data_offset": 2048, 00:36:33.675 "data_size": 63488 00:36:33.675 }, 00:36:33.675 { 00:36:33.675 "name": "BaseBdev3", 00:36:33.675 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:33.675 "is_configured": true, 00:36:33.675 "data_offset": 2048, 00:36:33.675 "data_size": 63488 00:36:33.675 } 00:36:33.675 ] 00:36:33.675 }' 00:36:33.675 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:33.675 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:33.676 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:33.676 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:33.676 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:33.933 [2024-06-10 12:00:05.810465] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:33.933 [2024-06-10 12:00:05.900543] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:33.933 [2024-06-10 12:00:05.900633] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:33.933 [2024-06-10 12:00:05.900670] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:33.933 [2024-06-10 12:00:05.900680] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.933 12:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.190 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:34.190 "name": "raid_bdev1", 00:36:34.190 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:34.190 "strip_size_kb": 64, 00:36:34.190 
"state": "online", 00:36:34.190 "raid_level": "raid5f", 00:36:34.190 "superblock": true, 00:36:34.190 "num_base_bdevs": 3, 00:36:34.190 "num_base_bdevs_discovered": 2, 00:36:34.190 "num_base_bdevs_operational": 2, 00:36:34.190 "base_bdevs_list": [ 00:36:34.190 { 00:36:34.190 "name": null, 00:36:34.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.190 "is_configured": false, 00:36:34.190 "data_offset": 2048, 00:36:34.190 "data_size": 63488 00:36:34.190 }, 00:36:34.190 { 00:36:34.190 "name": "BaseBdev2", 00:36:34.190 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:34.190 "is_configured": true, 00:36:34.190 "data_offset": 2048, 00:36:34.190 "data_size": 63488 00:36:34.190 }, 00:36:34.190 { 00:36:34.190 "name": "BaseBdev3", 00:36:34.190 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:34.190 "is_configured": true, 00:36:34.190 "data_offset": 2048, 00:36:34.190 "data_size": 63488 00:36:34.190 } 00:36:34.190 ] 00:36:34.190 }' 00:36:34.190 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:34.190 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.755 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:34.755 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:34.755 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:34.755 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:34.755 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:34.755 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.755 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:35.012 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:35.012 "name": "raid_bdev1", 00:36:35.012 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:35.012 "strip_size_kb": 64, 00:36:35.012 "state": "online", 00:36:35.012 "raid_level": "raid5f", 00:36:35.012 "superblock": true, 00:36:35.012 "num_base_bdevs": 3, 00:36:35.012 "num_base_bdevs_discovered": 2, 00:36:35.012 "num_base_bdevs_operational": 2, 00:36:35.012 "base_bdevs_list": [ 00:36:35.012 { 00:36:35.012 "name": null, 00:36:35.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:35.012 "is_configured": false, 00:36:35.012 "data_offset": 2048, 00:36:35.012 "data_size": 63488 00:36:35.012 }, 00:36:35.012 { 00:36:35.012 "name": "BaseBdev2", 00:36:35.012 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:35.013 "is_configured": true, 00:36:35.013 "data_offset": 2048, 00:36:35.013 "data_size": 63488 00:36:35.013 }, 00:36:35.013 { 00:36:35.013 "name": "BaseBdev3", 00:36:35.013 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:35.013 "is_configured": true, 00:36:35.013 "data_offset": 2048, 00:36:35.013 "data_size": 63488 00:36:35.013 } 00:36:35.013 ] 00:36:35.013 }' 00:36:35.013 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:35.013 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:35.013 12:00:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:36:35.013 12:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:35.013 12:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:35.272 12:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:35.531 [2024-06-10 12:00:07.574256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:35.531 [2024-06-10 12:00:07.574363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:35.531 [2024-06-10 12:00:07.574409] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:36:35.531 [2024-06-10 12:00:07.574439] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:35.531 [2024-06-10 12:00:07.575023] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:35.531 [2024-06-10 12:00:07.575074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:35.531 [2024-06-10 12:00:07.575226] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:35.531 [2024-06-10 12:00:07.575242] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:35.531 [2024-06-10 12:00:07.575268] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:35.531 BaseBdev1 00:36:35.790 12:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.726 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:36.985 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:36.985 "name": "raid_bdev1", 00:36:36.985 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:36.985 "strip_size_kb": 64, 00:36:36.985 "state": "online", 
00:36:36.985 "raid_level": "raid5f", 00:36:36.985 "superblock": true, 00:36:36.985 "num_base_bdevs": 3, 00:36:36.985 "num_base_bdevs_discovered": 2, 00:36:36.985 "num_base_bdevs_operational": 2, 00:36:36.985 "base_bdevs_list": [ 00:36:36.985 { 00:36:36.985 "name": null, 00:36:36.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:36.985 "is_configured": false, 00:36:36.985 "data_offset": 2048, 00:36:36.985 "data_size": 63488 00:36:36.985 }, 00:36:36.985 { 00:36:36.985 "name": "BaseBdev2", 00:36:36.985 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:36.985 "is_configured": true, 00:36:36.985 "data_offset": 2048, 00:36:36.985 "data_size": 63488 00:36:36.985 }, 00:36:36.986 { 00:36:36.986 "name": "BaseBdev3", 00:36:36.986 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:36.986 "is_configured": true, 00:36:36.986 "data_offset": 2048, 00:36:36.986 "data_size": 63488 00:36:36.986 } 00:36:36.986 ] 00:36:36.986 }' 00:36:36.986 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:36.986 12:00:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.554 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:37.554 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:37.554 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:37.554 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:37.554 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:37.554 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.554 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.813 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:37.813 "name": "raid_bdev1", 00:36:37.813 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:37.813 "strip_size_kb": 64, 00:36:37.813 "state": "online", 00:36:37.813 "raid_level": "raid5f", 00:36:37.813 "superblock": true, 00:36:37.813 "num_base_bdevs": 3, 00:36:37.813 "num_base_bdevs_discovered": 2, 00:36:37.814 "num_base_bdevs_operational": 2, 00:36:37.814 "base_bdevs_list": [ 00:36:37.814 { 00:36:37.814 "name": null, 00:36:37.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.814 "is_configured": false, 00:36:37.814 "data_offset": 2048, 00:36:37.814 "data_size": 63488 00:36:37.814 }, 00:36:37.814 { 00:36:37.814 "name": "BaseBdev2", 00:36:37.814 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:37.814 "is_configured": true, 00:36:37.814 "data_offset": 2048, 00:36:37.814 "data_size": 63488 00:36:37.814 }, 00:36:37.814 { 00:36:37.814 "name": "BaseBdev3", 00:36:37.814 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:37.814 "is_configured": true, 00:36:37.814 "data_offset": 2048, 00:36:37.814 "data_size": 63488 00:36:37.814 } 00:36:37.814 ] 00:36:37.814 }' 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # local es=0 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:37.814 12:00:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:38.072 [2024-06-10 12:00:10.107040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:38.072 [2024-06-10 12:00:10.107317] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:38.072 [2024-06-10 12:00:10.107332] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:38.072 request: 00:36:38.072 { 00:36:38.072 "base_bdev": "BaseBdev1", 00:36:38.072 "raid_bdev": "raid_bdev1", 00:36:38.072 "method": "bdev_raid_add_base_bdev", 00:36:38.072 "req_id": 1 00:36:38.072 } 00:36:38.072 Got JSON-RPC error response 00:36:38.072 response: 00:36:38.072 { 00:36:38.072 "code": -22, 00:36:38.072 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:38.072 } 00:36:38.072 12:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # es=1 00:36:38.072 12:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:38.072 12:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:38.072 12:00:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:38.072 12:00:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:39.449 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:39.449 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:36:39.449 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:39.450 "name": "raid_bdev1", 00:36:39.450 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:39.450 "strip_size_kb": 64, 00:36:39.450 "state": "online", 00:36:39.450 "raid_level": "raid5f", 00:36:39.450 "superblock": true, 00:36:39.450 "num_base_bdevs": 3, 00:36:39.450 "num_base_bdevs_discovered": 2, 00:36:39.450 "num_base_bdevs_operational": 2, 00:36:39.450 "base_bdevs_list": [ 00:36:39.450 { 00:36:39.450 "name": null, 00:36:39.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.450 "is_configured": false, 00:36:39.450 "data_offset": 2048, 00:36:39.450 "data_size": 63488 00:36:39.450 }, 00:36:39.450 { 00:36:39.450 "name": "BaseBdev2", 00:36:39.450 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:39.450 "is_configured": true, 00:36:39.450 "data_offset": 2048, 00:36:39.450 "data_size": 63488 00:36:39.450 }, 00:36:39.450 { 00:36:39.450 "name": "BaseBdev3", 00:36:39.450 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:39.450 "is_configured": true, 00:36:39.450 "data_offset": 2048, 00:36:39.450 "data_size": 63488 00:36:39.450 } 00:36:39.450 ] 00:36:39.450 }' 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:39.450 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:40.083 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:40.083 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:40.083 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:40.083 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:40.083 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:40.083 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.084 12:00:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:40.341 "name": "raid_bdev1", 00:36:40.341 "uuid": "7a9e226e-b1cf-4688-ad75-6d6e423b9155", 00:36:40.341 "strip_size_kb": 64, 00:36:40.341 "state": "online", 00:36:40.341 "raid_level": "raid5f", 00:36:40.341 "superblock": true, 00:36:40.341 "num_base_bdevs": 3, 00:36:40.341 "num_base_bdevs_discovered": 2, 00:36:40.341 "num_base_bdevs_operational": 2, 00:36:40.341 "base_bdevs_list": [ 00:36:40.341 { 00:36:40.341 "name": null, 00:36:40.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:40.341 "is_configured": false, 00:36:40.341 "data_offset": 2048, 00:36:40.341 "data_size": 63488 00:36:40.341 }, 00:36:40.341 { 00:36:40.341 "name": "BaseBdev2", 00:36:40.341 "uuid": "15da699a-fc60-52dc-8e5d-4e8fb6853ea4", 00:36:40.341 "is_configured": true, 00:36:40.341 "data_offset": 2048, 00:36:40.341 "data_size": 63488 00:36:40.341 }, 00:36:40.341 { 00:36:40.341 "name": "BaseBdev3", 00:36:40.341 "uuid": "4ed3c1b2-f20c-524d-9014-f95cd32601f6", 00:36:40.341 "is_configured": true, 00:36:40.341 "data_offset": 2048, 00:36:40.341 "data_size": 63488 00:36:40.341 } 00:36:40.341 ] 00:36:40.341 }' 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 155114 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@949 -- # '[' -z 155114 ']' 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # kill -0 155114 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # uname 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 155114 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 155114' 00:36:40.341 killing process with pid 155114 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # kill 155114 00:36:40.341 Received shutdown signal, test time was about 60.000000 seconds 00:36:40.341 00:36:40.341 Latency(us) 00:36:40.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.341 =================================================================================================================== 00:36:40.341 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:40.341 [2024-06-10 12:00:12.361641] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:40.341 [2024-06-10 12:00:12.361778] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:40.341 [2024-06-10 12:00:12.361860] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:36:40.341 [2024-06-10 12:00:12.361876] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:36:40.341 12:00:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # wait 155114 00:36:40.907 [2024-06-10 12:00:12.862848] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:42.831 12:00:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:36:42.831 00:36:42.831 real 0m38.411s 00:36:42.831 user 0m59.352s 00:36:42.831 sys 0m4.981s 00:36:42.831 12:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:42.831 12:00:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:42.831 ************************************ 00:36:42.831 END TEST raid5f_rebuild_test_sb 00:36:42.831 ************************************ 00:36:42.831 12:00:14 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:36:42.831 12:00:14 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:36:42.831 12:00:14 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:36:42.831 12:00:14 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:42.831 12:00:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:42.831 ************************************ 00:36:42.831 START TEST raid5f_state_function_test 00:36:42.831 ************************************ 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid5f 4 false 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:42.831 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=156074 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 156074' 00:36:42.832 Process raid pid: 156074 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 156074 /var/tmp/spdk-raid.sock 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 156074 ']' 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:42.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:42.832 12:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.832 [2024-06-10 12:00:14.624637] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:36:42.832 [2024-06-10 12:00:14.624825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.832 [2024-06-10 12:00:14.789867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.128 [2024-06-10 12:00:15.003445] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.409 [2024-06-10 12:00:15.235253] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:43.667 12:00:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:43.667 12:00:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:36:43.667 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:43.925 [2024-06-10 12:00:15.841123] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:43.925 [2024-06-10 12:00:15.841202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:43.925 [2024-06-10 12:00:15.841214] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:43.925 [2024-06-10 12:00:15.841254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:43.925 [2024-06-10 12:00:15.841262] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:43.925 [2024-06-10 12:00:15.841279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:43.925 [2024-06-10 12:00:15.841287] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:43.925 [2024-06-10 12:00:15.841310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:43.925 12:00:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:44.182 12:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:44.182 "name": "Existed_Raid", 00:36:44.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.182 "strip_size_kb": 64, 00:36:44.182 "state": "configuring", 00:36:44.182 "raid_level": "raid5f", 00:36:44.182 "superblock": false, 00:36:44.182 "num_base_bdevs": 4, 00:36:44.182 "num_base_bdevs_discovered": 0, 00:36:44.182 "num_base_bdevs_operational": 4, 00:36:44.182 "base_bdevs_list": [ 00:36:44.182 { 00:36:44.182 "name": "BaseBdev1", 00:36:44.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.182 "is_configured": false, 00:36:44.182 "data_offset": 0, 00:36:44.182 "data_size": 0 00:36:44.182 }, 00:36:44.182 { 00:36:44.182 "name": "BaseBdev2", 00:36:44.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.182 "is_configured": false, 00:36:44.182 "data_offset": 0, 00:36:44.182 "data_size": 0 00:36:44.182 }, 00:36:44.182 { 00:36:44.182 "name": "BaseBdev3", 00:36:44.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.183 "is_configured": false, 00:36:44.183 "data_offset": 0, 00:36:44.183 "data_size": 0 00:36:44.183 }, 00:36:44.183 { 00:36:44.183 "name": "BaseBdev4", 00:36:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.183 "is_configured": false, 00:36:44.183 "data_offset": 0, 00:36:44.183 "data_size": 0 00:36:44.183 } 00:36:44.183 ] 00:36:44.183 }' 00:36:44.183 12:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:44.183 12:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.747 12:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:45.004 [2024-06-10 12:00:16.857221] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:45.005 [2024-06-10 12:00:16.857257] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:36:45.005 12:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:45.263 [2024-06-10 12:00:17.129307] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:45.263 [2024-06-10 12:00:17.129375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:45.263 [2024-06-10 12:00:17.129385] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:45.263 [2024-06-10 12:00:17.129433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:45.263 [2024-06-10 12:00:17.129442] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:45.263 [2024-06-10 12:00:17.129478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:45.263 [2024-06-10 12:00:17.129486] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:45.263 [2024-06-10 12:00:17.129508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:45.263 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:45.520 [2024-06-10 12:00:17.374384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:45.520 BaseBdev1 00:36:45.520 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:45.520 12:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:36:45.520 12:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:36:45.520 12:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:36:45.520 12:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:36:45.520 12:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:36:45.520 12:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:45.778 12:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:46.036 [ 00:36:46.036 { 00:36:46.036 "name": "BaseBdev1", 00:36:46.036 "aliases": [ 00:36:46.036 "d6bcd29e-b21c-4e23-8754-94c2d87ac25e" 00:36:46.036 ], 00:36:46.036 "product_name": "Malloc disk", 00:36:46.036 "block_size": 512, 00:36:46.036 "num_blocks": 65536, 00:36:46.036 "uuid": "d6bcd29e-b21c-4e23-8754-94c2d87ac25e", 00:36:46.036 "assigned_rate_limits": { 00:36:46.036 "rw_ios_per_sec": 0, 00:36:46.036 "rw_mbytes_per_sec": 0, 00:36:46.036 "r_mbytes_per_sec": 0, 00:36:46.036 "w_mbytes_per_sec": 0 00:36:46.036 }, 00:36:46.036 "claimed": true, 00:36:46.036 "claim_type": "exclusive_write", 00:36:46.036 "zoned": false, 00:36:46.036 "supported_io_types": { 00:36:46.037 "read": true, 00:36:46.037 "write": true, 00:36:46.037 "unmap": true, 00:36:46.037 "write_zeroes": true, 00:36:46.037 "flush": true, 00:36:46.037 "reset": true, 00:36:46.037 "compare": false, 00:36:46.037 "compare_and_write": false, 00:36:46.037 "abort": true, 00:36:46.037 "nvme_admin": false, 00:36:46.037 "nvme_io": false 00:36:46.037 }, 00:36:46.037 "memory_domains": [ 00:36:46.037 { 00:36:46.037 "dma_device_id": "system", 00:36:46.037 "dma_device_type": 1 00:36:46.037 }, 00:36:46.037 { 00:36:46.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:46.037 "dma_device_type": 2 00:36:46.037 } 00:36:46.037 ], 00:36:46.037 "driver_specific": {} 00:36:46.037 } 00:36:46.037 ] 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.037 12:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:46.295 12:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:46.295 "name": "Existed_Raid", 00:36:46.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.295 "strip_size_kb": 64, 00:36:46.295 "state": "configuring", 00:36:46.295 "raid_level": "raid5f", 00:36:46.295 "superblock": false, 00:36:46.295 "num_base_bdevs": 4, 00:36:46.295 "num_base_bdevs_discovered": 1, 00:36:46.295 "num_base_bdevs_operational": 4, 00:36:46.295 "base_bdevs_list": [ 00:36:46.295 { 00:36:46.295 "name": "BaseBdev1", 00:36:46.295 "uuid": "d6bcd29e-b21c-4e23-8754-94c2d87ac25e", 00:36:46.295 "is_configured": true, 00:36:46.295 "data_offset": 0, 00:36:46.295 "data_size": 65536 00:36:46.295 }, 00:36:46.295 { 00:36:46.295 "name": "BaseBdev2", 00:36:46.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.295 "is_configured": false, 00:36:46.295 "data_offset": 0, 00:36:46.295 "data_size": 0 00:36:46.295 }, 00:36:46.295 { 00:36:46.295 "name": "BaseBdev3", 00:36:46.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.295 "is_configured": false, 00:36:46.295 "data_offset": 0, 00:36:46.295 "data_size": 0 00:36:46.295 }, 00:36:46.295 { 00:36:46.295 "name": "BaseBdev4", 00:36:46.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.295 "is_configured": false, 00:36:46.295 "data_offset": 0, 00:36:46.295 "data_size": 0 00:36:46.295 } 00:36:46.295 ] 00:36:46.295 }' 00:36:46.295 12:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:46.295 12:00:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:46.861 12:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:47.119 [2024-06-10 12:00:18.986745] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:47.119 [2024-06-10 12:00:18.986801] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:36:47.119 12:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:47.377 [2024-06-10 12:00:19.294882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:47.378 [2024-06-10 12:00:19.297090] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:47.378 [2024-06-10 12:00:19.297167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:47.378 [2024-06-10 12:00:19.297179] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:47.378 [2024-06-10 12:00:19.297206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:47.378 [2024-06-10 12:00:19.297216] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:47.378 [2024-06-10 12:00:19.297238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.378 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:47.635 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:47.635 "name": "Existed_Raid", 00:36:47.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.635 "strip_size_kb": 64, 00:36:47.635 "state": "configuring", 00:36:47.635 "raid_level": "raid5f", 00:36:47.635 "superblock": false, 00:36:47.635 "num_base_bdevs": 4, 00:36:47.635 "num_base_bdevs_discovered": 1, 00:36:47.635 "num_base_bdevs_operational": 4, 00:36:47.635 "base_bdevs_list": [ 00:36:47.635 { 00:36:47.635 "name": "BaseBdev1", 00:36:47.635 "uuid": "d6bcd29e-b21c-4e23-8754-94c2d87ac25e", 00:36:47.635 "is_configured": true, 00:36:47.635 "data_offset": 0, 00:36:47.635 "data_size": 65536 00:36:47.635 }, 00:36:47.635 { 00:36:47.635 "name": "BaseBdev2", 00:36:47.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.635 "is_configured": false, 00:36:47.635 "data_offset": 0, 00:36:47.635 "data_size": 0 00:36:47.635 }, 00:36:47.635 { 00:36:47.635 "name": "BaseBdev3", 00:36:47.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.635 "is_configured": false, 00:36:47.635 "data_offset": 0, 00:36:47.635 "data_size": 0 00:36:47.635 }, 00:36:47.635 { 00:36:47.635 "name": "BaseBdev4", 00:36:47.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.635 "is_configured": false, 00:36:47.635 "data_offset": 0, 
00:36:47.635 "data_size": 0 00:36:47.635 } 00:36:47.635 ] 00:36:47.635 }' 00:36:47.635 12:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:47.635 12:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.201 12:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:48.459 [2024-06-10 12:00:20.499211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:48.459 BaseBdev2 00:36:48.717 12:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:48.717 12:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:36:48.717 12:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:36:48.717 12:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:36:48.717 12:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:36:48.717 12:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:36:48.717 12:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:48.974 12:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:49.233 [ 00:36:49.233 { 00:36:49.233 "name": "BaseBdev2", 00:36:49.233 "aliases": [ 00:36:49.233 "7482df80-7b1d-44cf-8c74-d13243c9a8a1" 00:36:49.233 ], 00:36:49.233 "product_name": "Malloc disk", 00:36:49.233 "block_size": 512, 00:36:49.233 "num_blocks": 65536, 00:36:49.233 "uuid": "7482df80-7b1d-44cf-8c74-d13243c9a8a1", 00:36:49.233 "assigned_rate_limits": { 00:36:49.233 "rw_ios_per_sec": 0, 00:36:49.233 "rw_mbytes_per_sec": 0, 00:36:49.233 "r_mbytes_per_sec": 0, 00:36:49.233 "w_mbytes_per_sec": 0 00:36:49.233 }, 00:36:49.233 "claimed": true, 00:36:49.233 "claim_type": "exclusive_write", 00:36:49.233 "zoned": false, 00:36:49.233 "supported_io_types": { 00:36:49.233 "read": true, 00:36:49.233 "write": true, 00:36:49.233 "unmap": true, 00:36:49.233 "write_zeroes": true, 00:36:49.233 "flush": true, 00:36:49.233 "reset": true, 00:36:49.233 "compare": false, 00:36:49.233 "compare_and_write": false, 00:36:49.233 "abort": true, 00:36:49.233 "nvme_admin": false, 00:36:49.233 "nvme_io": false 00:36:49.233 }, 00:36:49.233 "memory_domains": [ 00:36:49.233 { 00:36:49.233 "dma_device_id": "system", 00:36:49.233 "dma_device_type": 1 00:36:49.233 }, 00:36:49.233 { 00:36:49.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:49.233 "dma_device_type": 2 00:36:49.233 } 00:36:49.233 ], 00:36:49.233 "driver_specific": {} 00:36:49.233 } 00:36:49.233 ] 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:49.233 12:00:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:49.233 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:49.491 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:49.491 "name": "Existed_Raid", 00:36:49.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.491 "strip_size_kb": 64, 00:36:49.491 "state": "configuring", 00:36:49.491 "raid_level": "raid5f", 00:36:49.491 "superblock": false, 00:36:49.491 "num_base_bdevs": 4, 00:36:49.491 "num_base_bdevs_discovered": 2, 00:36:49.491 "num_base_bdevs_operational": 4, 00:36:49.491 "base_bdevs_list": [ 00:36:49.491 { 00:36:49.491 "name": "BaseBdev1", 00:36:49.491 "uuid": "d6bcd29e-b21c-4e23-8754-94c2d87ac25e", 00:36:49.491 "is_configured": true, 00:36:49.491 "data_offset": 0, 00:36:49.491 "data_size": 65536 00:36:49.491 }, 00:36:49.491 { 00:36:49.491 "name": "BaseBdev2", 00:36:49.491 "uuid": "7482df80-7b1d-44cf-8c74-d13243c9a8a1", 00:36:49.491 "is_configured": true, 00:36:49.491 "data_offset": 0, 00:36:49.491 "data_size": 65536 00:36:49.491 }, 00:36:49.491 { 00:36:49.491 "name": "BaseBdev3", 00:36:49.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.491 "is_configured": false, 00:36:49.491 "data_offset": 0, 00:36:49.491 "data_size": 0 00:36:49.491 }, 00:36:49.491 { 00:36:49.491 "name": "BaseBdev4", 00:36:49.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.491 "is_configured": false, 00:36:49.491 "data_offset": 0, 00:36:49.491 "data_size": 0 00:36:49.491 } 00:36:49.491 ] 00:36:49.491 }' 00:36:49.491 12:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:49.491 12:00:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.058 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:50.397 [2024-06-10 12:00:22.245105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:50.397 BaseBdev3 00:36:50.397 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:36:50.397 12:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 
00:36:50.397 12:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:36:50.397 12:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:36:50.397 12:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:36:50.397 12:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:36:50.397 12:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:50.655 12:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:50.912 [ 00:36:50.912 { 00:36:50.912 "name": "BaseBdev3", 00:36:50.912 "aliases": [ 00:36:50.912 "1135b4a5-e557-43a6-a2ee-16d43375a644" 00:36:50.912 ], 00:36:50.912 "product_name": "Malloc disk", 00:36:50.912 "block_size": 512, 00:36:50.912 "num_blocks": 65536, 00:36:50.912 "uuid": "1135b4a5-e557-43a6-a2ee-16d43375a644", 00:36:50.912 "assigned_rate_limits": { 00:36:50.912 "rw_ios_per_sec": 0, 00:36:50.912 "rw_mbytes_per_sec": 0, 00:36:50.912 "r_mbytes_per_sec": 0, 00:36:50.912 "w_mbytes_per_sec": 0 00:36:50.912 }, 00:36:50.912 "claimed": true, 00:36:50.912 "claim_type": "exclusive_write", 00:36:50.912 "zoned": false, 00:36:50.912 "supported_io_types": { 00:36:50.912 "read": true, 00:36:50.912 "write": true, 00:36:50.912 "unmap": true, 00:36:50.912 "write_zeroes": true, 00:36:50.912 "flush": true, 00:36:50.912 "reset": true, 00:36:50.912 "compare": false, 00:36:50.912 "compare_and_write": false, 00:36:50.912 "abort": true, 00:36:50.912 "nvme_admin": false, 00:36:50.912 "nvme_io": false 00:36:50.912 }, 00:36:50.912 "memory_domains": [ 00:36:50.912 { 00:36:50.912 "dma_device_id": "system", 00:36:50.912 "dma_device_type": 1 00:36:50.912 }, 00:36:50.912 { 00:36:50.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:50.912 "dma_device_type": 2 00:36:50.912 } 00:36:50.912 ], 00:36:50.912 "driver_specific": {} 00:36:50.912 } 00:36:50.912 ] 00:36:50.912 12:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:36:50.912 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:50.912 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:50.912 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:50.913 12:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.171 12:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:51.171 "name": "Existed_Raid", 00:36:51.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.171 "strip_size_kb": 64, 00:36:51.171 "state": "configuring", 00:36:51.171 "raid_level": "raid5f", 00:36:51.171 "superblock": false, 00:36:51.171 "num_base_bdevs": 4, 00:36:51.171 "num_base_bdevs_discovered": 3, 00:36:51.171 "num_base_bdevs_operational": 4, 00:36:51.171 "base_bdevs_list": [ 00:36:51.171 { 00:36:51.171 "name": "BaseBdev1", 00:36:51.171 "uuid": "d6bcd29e-b21c-4e23-8754-94c2d87ac25e", 00:36:51.171 "is_configured": true, 00:36:51.171 "data_offset": 0, 00:36:51.171 "data_size": 65536 00:36:51.171 }, 00:36:51.171 { 00:36:51.171 "name": "BaseBdev2", 00:36:51.171 "uuid": "7482df80-7b1d-44cf-8c74-d13243c9a8a1", 00:36:51.171 "is_configured": true, 00:36:51.171 "data_offset": 0, 00:36:51.171 "data_size": 65536 00:36:51.171 }, 00:36:51.171 { 00:36:51.171 "name": "BaseBdev3", 00:36:51.171 "uuid": "1135b4a5-e557-43a6-a2ee-16d43375a644", 00:36:51.171 "is_configured": true, 00:36:51.171 "data_offset": 0, 00:36:51.171 "data_size": 65536 00:36:51.171 }, 00:36:51.171 { 00:36:51.171 "name": "BaseBdev4", 00:36:51.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.171 "is_configured": false, 00:36:51.171 "data_offset": 0, 00:36:51.171 "data_size": 0 00:36:51.171 } 00:36:51.171 ] 00:36:51.171 }' 00:36:51.171 12:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:51.171 12:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:51.743 12:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:36:52.007 [2024-06-10 12:00:23.942882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:52.007 [2024-06-10 12:00:23.943142] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:36:52.007 [2024-06-10 12:00:23.943194] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:52.007 [2024-06-10 12:00:23.943434] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:36:52.007 [2024-06-10 12:00:23.951880] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:36:52.007 [2024-06-10 12:00:23.952061] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:36:52.007 [2024-06-10 12:00:23.952509] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:52.007 BaseBdev4 00:36:52.007 12:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:36:52.007 12:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:36:52.007 12:00:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_timeout= 00:36:52.007 12:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:36:52.007 12:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:36:52.007 12:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:36:52.007 12:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:52.264 12:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:52.521 [ 00:36:52.521 { 00:36:52.521 "name": "BaseBdev4", 00:36:52.521 "aliases": [ 00:36:52.521 "68add2ad-9c5b-44be-9752-f68e0af43934" 00:36:52.521 ], 00:36:52.521 "product_name": "Malloc disk", 00:36:52.521 "block_size": 512, 00:36:52.521 "num_blocks": 65536, 00:36:52.521 "uuid": "68add2ad-9c5b-44be-9752-f68e0af43934", 00:36:52.521 "assigned_rate_limits": { 00:36:52.521 "rw_ios_per_sec": 0, 00:36:52.521 "rw_mbytes_per_sec": 0, 00:36:52.521 "r_mbytes_per_sec": 0, 00:36:52.521 "w_mbytes_per_sec": 0 00:36:52.521 }, 00:36:52.521 "claimed": true, 00:36:52.521 "claim_type": "exclusive_write", 00:36:52.521 "zoned": false, 00:36:52.521 "supported_io_types": { 00:36:52.521 "read": true, 00:36:52.521 "write": true, 00:36:52.521 "unmap": true, 00:36:52.521 "write_zeroes": true, 00:36:52.521 "flush": true, 00:36:52.521 "reset": true, 00:36:52.521 "compare": false, 00:36:52.521 "compare_and_write": false, 00:36:52.521 "abort": true, 00:36:52.521 "nvme_admin": false, 00:36:52.521 "nvme_io": false 00:36:52.521 }, 00:36:52.521 "memory_domains": [ 00:36:52.521 { 00:36:52.521 "dma_device_id": "system", 00:36:52.521 "dma_device_type": 1 00:36:52.521 }, 00:36:52.521 { 00:36:52.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.521 "dma_device_type": 2 00:36:52.521 } 00:36:52.521 ], 00:36:52.521 "driver_specific": {} 00:36:52.521 } 00:36:52.521 ] 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:52.521 12:00:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:52.521 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:52.778 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:52.778 "name": "Existed_Raid", 00:36:52.778 "uuid": "5c2356a3-6cfd-41a5-803d-35b5fe4c2b35", 00:36:52.779 "strip_size_kb": 64, 00:36:52.779 "state": "online", 00:36:52.779 "raid_level": "raid5f", 00:36:52.779 "superblock": false, 00:36:52.779 "num_base_bdevs": 4, 00:36:52.779 "num_base_bdevs_discovered": 4, 00:36:52.779 "num_base_bdevs_operational": 4, 00:36:52.779 "base_bdevs_list": [ 00:36:52.779 { 00:36:52.779 "name": "BaseBdev1", 00:36:52.779 "uuid": "d6bcd29e-b21c-4e23-8754-94c2d87ac25e", 00:36:52.779 "is_configured": true, 00:36:52.779 "data_offset": 0, 00:36:52.779 "data_size": 65536 00:36:52.779 }, 00:36:52.779 { 00:36:52.779 "name": "BaseBdev2", 00:36:52.779 "uuid": "7482df80-7b1d-44cf-8c74-d13243c9a8a1", 00:36:52.779 "is_configured": true, 00:36:52.779 "data_offset": 0, 00:36:52.779 "data_size": 65536 00:36:52.779 }, 00:36:52.779 { 00:36:52.779 "name": "BaseBdev3", 00:36:52.779 "uuid": "1135b4a5-e557-43a6-a2ee-16d43375a644", 00:36:52.779 "is_configured": true, 00:36:52.779 "data_offset": 0, 00:36:52.779 "data_size": 65536 00:36:52.779 }, 00:36:52.779 { 00:36:52.779 "name": "BaseBdev4", 00:36:52.779 "uuid": "68add2ad-9c5b-44be-9752-f68e0af43934", 00:36:52.779 "is_configured": true, 00:36:52.779 "data_offset": 0, 00:36:52.779 "data_size": 65536 00:36:52.779 } 00:36:52.779 ] 00:36:52.779 }' 00:36:52.779 12:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:52.779 12:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.349 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:53.350 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:53.350 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:53.350 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:53.350 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:53.350 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:36:53.609 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:53.609 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:53.609 [2024-06-10 12:00:25.606691] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:53.609 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:53.609 "name": "Existed_Raid", 00:36:53.609 "aliases": [ 00:36:53.609 "5c2356a3-6cfd-41a5-803d-35b5fe4c2b35" 00:36:53.609 ], 00:36:53.609 "product_name": "Raid Volume", 00:36:53.609 "block_size": 512, 00:36:53.609 "num_blocks": 196608, 00:36:53.609 "uuid": "5c2356a3-6cfd-41a5-803d-35b5fe4c2b35", 00:36:53.609 
"assigned_rate_limits": { 00:36:53.609 "rw_ios_per_sec": 0, 00:36:53.609 "rw_mbytes_per_sec": 0, 00:36:53.609 "r_mbytes_per_sec": 0, 00:36:53.609 "w_mbytes_per_sec": 0 00:36:53.609 }, 00:36:53.609 "claimed": false, 00:36:53.609 "zoned": false, 00:36:53.609 "supported_io_types": { 00:36:53.609 "read": true, 00:36:53.609 "write": true, 00:36:53.609 "unmap": false, 00:36:53.609 "write_zeroes": true, 00:36:53.609 "flush": false, 00:36:53.609 "reset": true, 00:36:53.609 "compare": false, 00:36:53.609 "compare_and_write": false, 00:36:53.609 "abort": false, 00:36:53.609 "nvme_admin": false, 00:36:53.609 "nvme_io": false 00:36:53.609 }, 00:36:53.609 "driver_specific": { 00:36:53.609 "raid": { 00:36:53.609 "uuid": "5c2356a3-6cfd-41a5-803d-35b5fe4c2b35", 00:36:53.609 "strip_size_kb": 64, 00:36:53.609 "state": "online", 00:36:53.609 "raid_level": "raid5f", 00:36:53.609 "superblock": false, 00:36:53.609 "num_base_bdevs": 4, 00:36:53.609 "num_base_bdevs_discovered": 4, 00:36:53.609 "num_base_bdevs_operational": 4, 00:36:53.609 "base_bdevs_list": [ 00:36:53.609 { 00:36:53.609 "name": "BaseBdev1", 00:36:53.610 "uuid": "d6bcd29e-b21c-4e23-8754-94c2d87ac25e", 00:36:53.610 "is_configured": true, 00:36:53.610 "data_offset": 0, 00:36:53.610 "data_size": 65536 00:36:53.610 }, 00:36:53.610 { 00:36:53.610 "name": "BaseBdev2", 00:36:53.610 "uuid": "7482df80-7b1d-44cf-8c74-d13243c9a8a1", 00:36:53.610 "is_configured": true, 00:36:53.610 "data_offset": 0, 00:36:53.610 "data_size": 65536 00:36:53.610 }, 00:36:53.610 { 00:36:53.610 "name": "BaseBdev3", 00:36:53.610 "uuid": "1135b4a5-e557-43a6-a2ee-16d43375a644", 00:36:53.610 "is_configured": true, 00:36:53.610 "data_offset": 0, 00:36:53.610 "data_size": 65536 00:36:53.610 }, 00:36:53.610 { 00:36:53.610 "name": "BaseBdev4", 00:36:53.610 "uuid": "68add2ad-9c5b-44be-9752-f68e0af43934", 00:36:53.610 "is_configured": true, 00:36:53.610 "data_offset": 0, 00:36:53.610 "data_size": 65536 00:36:53.610 } 00:36:53.610 ] 00:36:53.610 } 00:36:53.610 } 00:36:53.610 }' 00:36:53.610 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:53.610 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:53.610 BaseBdev2 00:36:53.610 BaseBdev3 00:36:53.610 BaseBdev4' 00:36:53.610 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:53.867 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:53.868 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:54.125 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:54.125 "name": "BaseBdev1", 00:36:54.125 "aliases": [ 00:36:54.125 "d6bcd29e-b21c-4e23-8754-94c2d87ac25e" 00:36:54.125 ], 00:36:54.126 "product_name": "Malloc disk", 00:36:54.126 "block_size": 512, 00:36:54.126 "num_blocks": 65536, 00:36:54.126 "uuid": "d6bcd29e-b21c-4e23-8754-94c2d87ac25e", 00:36:54.126 "assigned_rate_limits": { 00:36:54.126 "rw_ios_per_sec": 0, 00:36:54.126 "rw_mbytes_per_sec": 0, 00:36:54.126 "r_mbytes_per_sec": 0, 00:36:54.126 "w_mbytes_per_sec": 0 00:36:54.126 }, 00:36:54.126 "claimed": true, 00:36:54.126 "claim_type": "exclusive_write", 00:36:54.126 "zoned": false, 00:36:54.126 "supported_io_types": { 00:36:54.126 "read": true, 
00:36:54.126 "write": true, 00:36:54.126 "unmap": true, 00:36:54.126 "write_zeroes": true, 00:36:54.126 "flush": true, 00:36:54.126 "reset": true, 00:36:54.126 "compare": false, 00:36:54.126 "compare_and_write": false, 00:36:54.126 "abort": true, 00:36:54.126 "nvme_admin": false, 00:36:54.126 "nvme_io": false 00:36:54.126 }, 00:36:54.126 "memory_domains": [ 00:36:54.126 { 00:36:54.126 "dma_device_id": "system", 00:36:54.126 "dma_device_type": 1 00:36:54.126 }, 00:36:54.126 { 00:36:54.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:54.126 "dma_device_type": 2 00:36:54.126 } 00:36:54.126 ], 00:36:54.126 "driver_specific": {} 00:36:54.126 }' 00:36:54.126 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:54.126 12:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:54.126 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:54.126 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:54.126 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:54.126 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:54.126 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:54.461 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:54.461 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:54.461 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:54.461 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:54.461 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:54.461 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:54.461 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:54.461 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:54.719 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:54.719 "name": "BaseBdev2", 00:36:54.719 "aliases": [ 00:36:54.719 "7482df80-7b1d-44cf-8c74-d13243c9a8a1" 00:36:54.719 ], 00:36:54.719 "product_name": "Malloc disk", 00:36:54.719 "block_size": 512, 00:36:54.719 "num_blocks": 65536, 00:36:54.719 "uuid": "7482df80-7b1d-44cf-8c74-d13243c9a8a1", 00:36:54.719 "assigned_rate_limits": { 00:36:54.719 "rw_ios_per_sec": 0, 00:36:54.719 "rw_mbytes_per_sec": 0, 00:36:54.719 "r_mbytes_per_sec": 0, 00:36:54.719 "w_mbytes_per_sec": 0 00:36:54.719 }, 00:36:54.719 "claimed": true, 00:36:54.719 "claim_type": "exclusive_write", 00:36:54.719 "zoned": false, 00:36:54.719 "supported_io_types": { 00:36:54.719 "read": true, 00:36:54.719 "write": true, 00:36:54.719 "unmap": true, 00:36:54.719 "write_zeroes": true, 00:36:54.719 "flush": true, 00:36:54.719 "reset": true, 00:36:54.719 "compare": false, 00:36:54.719 "compare_and_write": false, 00:36:54.719 "abort": true, 00:36:54.719 "nvme_admin": false, 00:36:54.719 "nvme_io": false 00:36:54.719 }, 00:36:54.719 "memory_domains": [ 00:36:54.719 { 00:36:54.719 "dma_device_id": "system", 00:36:54.719 "dma_device_type": 1 00:36:54.719 }, 
00:36:54.719 { 00:36:54.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:54.719 "dma_device_type": 2 00:36:54.719 } 00:36:54.719 ], 00:36:54.719 "driver_specific": {} 00:36:54.719 }' 00:36:54.719 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:54.719 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:54.719 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:54.719 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:54.719 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:36:54.977 12:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:55.234 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:55.234 "name": "BaseBdev3", 00:36:55.234 "aliases": [ 00:36:55.234 "1135b4a5-e557-43a6-a2ee-16d43375a644" 00:36:55.234 ], 00:36:55.234 "product_name": "Malloc disk", 00:36:55.234 "block_size": 512, 00:36:55.234 "num_blocks": 65536, 00:36:55.234 "uuid": "1135b4a5-e557-43a6-a2ee-16d43375a644", 00:36:55.234 "assigned_rate_limits": { 00:36:55.234 "rw_ios_per_sec": 0, 00:36:55.234 "rw_mbytes_per_sec": 0, 00:36:55.234 "r_mbytes_per_sec": 0, 00:36:55.234 "w_mbytes_per_sec": 0 00:36:55.234 }, 00:36:55.234 "claimed": true, 00:36:55.234 "claim_type": "exclusive_write", 00:36:55.234 "zoned": false, 00:36:55.234 "supported_io_types": { 00:36:55.234 "read": true, 00:36:55.234 "write": true, 00:36:55.234 "unmap": true, 00:36:55.235 "write_zeroes": true, 00:36:55.235 "flush": true, 00:36:55.235 "reset": true, 00:36:55.235 "compare": false, 00:36:55.235 "compare_and_write": false, 00:36:55.235 "abort": true, 00:36:55.235 "nvme_admin": false, 00:36:55.235 "nvme_io": false 00:36:55.235 }, 00:36:55.235 "memory_domains": [ 00:36:55.235 { 00:36:55.235 "dma_device_id": "system", 00:36:55.235 "dma_device_type": 1 00:36:55.235 }, 00:36:55.235 { 00:36:55.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:55.235 "dma_device_type": 2 00:36:55.235 } 00:36:55.235 ], 00:36:55.235 "driver_specific": {} 00:36:55.235 }' 00:36:55.235 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:55.492 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:55.492 12:00:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:55.492 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:55.492 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:55.492 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:55.492 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:55.492 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:55.749 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:55.749 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:55.749 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:55.749 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:55.749 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:55.749 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:36:55.749 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:56.012 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:56.012 "name": "BaseBdev4", 00:36:56.012 "aliases": [ 00:36:56.012 "68add2ad-9c5b-44be-9752-f68e0af43934" 00:36:56.012 ], 00:36:56.012 "product_name": "Malloc disk", 00:36:56.012 "block_size": 512, 00:36:56.012 "num_blocks": 65536, 00:36:56.012 "uuid": "68add2ad-9c5b-44be-9752-f68e0af43934", 00:36:56.012 "assigned_rate_limits": { 00:36:56.012 "rw_ios_per_sec": 0, 00:36:56.012 "rw_mbytes_per_sec": 0, 00:36:56.012 "r_mbytes_per_sec": 0, 00:36:56.012 "w_mbytes_per_sec": 0 00:36:56.012 }, 00:36:56.012 "claimed": true, 00:36:56.012 "claim_type": "exclusive_write", 00:36:56.012 "zoned": false, 00:36:56.012 "supported_io_types": { 00:36:56.012 "read": true, 00:36:56.012 "write": true, 00:36:56.012 "unmap": true, 00:36:56.012 "write_zeroes": true, 00:36:56.012 "flush": true, 00:36:56.012 "reset": true, 00:36:56.012 "compare": false, 00:36:56.012 "compare_and_write": false, 00:36:56.012 "abort": true, 00:36:56.012 "nvme_admin": false, 00:36:56.012 "nvme_io": false 00:36:56.012 }, 00:36:56.012 "memory_domains": [ 00:36:56.012 { 00:36:56.012 "dma_device_id": "system", 00:36:56.012 "dma_device_type": 1 00:36:56.012 }, 00:36:56.012 { 00:36:56.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:56.012 "dma_device_type": 2 00:36:56.012 } 00:36:56.012 ], 00:36:56.012 "driver_specific": {} 00:36:56.012 }' 00:36:56.013 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:56.013 12:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:56.013 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:56.013 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:56.013 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:56.270 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:56.270 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:36:56.270 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:56.270 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:56.270 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:56.270 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:56.270 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:56.270 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:56.528 [2024-06-10 12:00:28.539683] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.785 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:57.044 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:57.044 "name": "Existed_Raid", 00:36:57.044 "uuid": "5c2356a3-6cfd-41a5-803d-35b5fe4c2b35", 00:36:57.044 "strip_size_kb": 64, 00:36:57.044 "state": "online", 00:36:57.044 "raid_level": "raid5f", 00:36:57.044 "superblock": false, 00:36:57.044 "num_base_bdevs": 4, 00:36:57.044 "num_base_bdevs_discovered": 3, 00:36:57.044 "num_base_bdevs_operational": 3, 00:36:57.044 "base_bdevs_list": [ 00:36:57.044 { 00:36:57.044 "name": null, 00:36:57.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.044 "is_configured": false, 00:36:57.044 "data_offset": 0, 00:36:57.044 "data_size": 65536 
00:36:57.044 }, 00:36:57.044 { 00:36:57.044 "name": "BaseBdev2", 00:36:57.044 "uuid": "7482df80-7b1d-44cf-8c74-d13243c9a8a1", 00:36:57.044 "is_configured": true, 00:36:57.044 "data_offset": 0, 00:36:57.044 "data_size": 65536 00:36:57.044 }, 00:36:57.044 { 00:36:57.044 "name": "BaseBdev3", 00:36:57.044 "uuid": "1135b4a5-e557-43a6-a2ee-16d43375a644", 00:36:57.044 "is_configured": true, 00:36:57.044 "data_offset": 0, 00:36:57.044 "data_size": 65536 00:36:57.044 }, 00:36:57.044 { 00:36:57.044 "name": "BaseBdev4", 00:36:57.044 "uuid": "68add2ad-9c5b-44be-9752-f68e0af43934", 00:36:57.044 "is_configured": true, 00:36:57.044 "data_offset": 0, 00:36:57.044 "data_size": 65536 00:36:57.044 } 00:36:57.044 ] 00:36:57.044 }' 00:36:57.044 12:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:57.044 12:00:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.611 12:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:57.611 12:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:57.611 12:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:57.611 12:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:57.870 12:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:57.870 12:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:57.870 12:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:58.128 [2024-06-10 12:00:30.068910] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:58.128 [2024-06-10 12:00:30.069228] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:58.128 [2024-06-10 12:00:30.179343] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:58.385 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:58.385 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:58.385 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.385 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:58.385 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:58.385 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:58.385 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:36:58.644 [2024-06-10 12:00:30.623509] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:58.902 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:58.902 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:58.903 12:00:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.903 12:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:59.160 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:59.160 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:59.160 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:36:59.418 [2024-06-10 12:00:31.311520] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:59.418 [2024-06-10 12:00:31.311737] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:36:59.418 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:59.418 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:59.418 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.418 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:59.674 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:59.674 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:59.674 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:36:59.674 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:36:59.674 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:36:59.674 12:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:37:00.246 BaseBdev2 00:37:00.246 12:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:37:00.246 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:37:00.246 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:00.246 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:37:00.246 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:00.246 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:00.246 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:00.246 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:00.503 [ 00:37:00.503 { 00:37:00.503 "name": "BaseBdev2", 00:37:00.503 "aliases": [ 00:37:00.503 "95699549-b14a-4f4c-972e-b0c3678f7c65" 00:37:00.503 ], 00:37:00.503 "product_name": "Malloc disk", 00:37:00.503 "block_size": 512, 00:37:00.503 "num_blocks": 65536, 00:37:00.503 "uuid": 
"95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:00.503 "assigned_rate_limits": { 00:37:00.503 "rw_ios_per_sec": 0, 00:37:00.503 "rw_mbytes_per_sec": 0, 00:37:00.503 "r_mbytes_per_sec": 0, 00:37:00.503 "w_mbytes_per_sec": 0 00:37:00.503 }, 00:37:00.503 "claimed": false, 00:37:00.503 "zoned": false, 00:37:00.503 "supported_io_types": { 00:37:00.503 "read": true, 00:37:00.503 "write": true, 00:37:00.503 "unmap": true, 00:37:00.503 "write_zeroes": true, 00:37:00.503 "flush": true, 00:37:00.503 "reset": true, 00:37:00.503 "compare": false, 00:37:00.503 "compare_and_write": false, 00:37:00.503 "abort": true, 00:37:00.503 "nvme_admin": false, 00:37:00.503 "nvme_io": false 00:37:00.503 }, 00:37:00.503 "memory_domains": [ 00:37:00.503 { 00:37:00.503 "dma_device_id": "system", 00:37:00.503 "dma_device_type": 1 00:37:00.503 }, 00:37:00.503 { 00:37:00.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:00.503 "dma_device_type": 2 00:37:00.503 } 00:37:00.503 ], 00:37:00.503 "driver_specific": {} 00:37:00.503 } 00:37:00.503 ] 00:37:00.503 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:37:00.503 12:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:37:00.503 12:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:00.503 12:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:37:00.760 BaseBdev3 00:37:00.760 12:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:37:00.760 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:37:00.760 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:00.760 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:37:00.760 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:00.760 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:00.760 12:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:01.018 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:01.274 [ 00:37:01.274 { 00:37:01.274 "name": "BaseBdev3", 00:37:01.274 "aliases": [ 00:37:01.274 "4db55e9d-258f-4d34-8132-3784dae67ec3" 00:37:01.274 ], 00:37:01.274 "product_name": "Malloc disk", 00:37:01.274 "block_size": 512, 00:37:01.274 "num_blocks": 65536, 00:37:01.274 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:01.274 "assigned_rate_limits": { 00:37:01.274 "rw_ios_per_sec": 0, 00:37:01.274 "rw_mbytes_per_sec": 0, 00:37:01.274 "r_mbytes_per_sec": 0, 00:37:01.274 "w_mbytes_per_sec": 0 00:37:01.274 }, 00:37:01.274 "claimed": false, 00:37:01.274 "zoned": false, 00:37:01.274 "supported_io_types": { 00:37:01.274 "read": true, 00:37:01.274 "write": true, 00:37:01.274 "unmap": true, 00:37:01.274 "write_zeroes": true, 00:37:01.274 "flush": true, 00:37:01.274 "reset": true, 00:37:01.274 "compare": false, 00:37:01.274 "compare_and_write": false, 00:37:01.274 "abort": true, 00:37:01.274 
"nvme_admin": false, 00:37:01.274 "nvme_io": false 00:37:01.274 }, 00:37:01.274 "memory_domains": [ 00:37:01.274 { 00:37:01.274 "dma_device_id": "system", 00:37:01.274 "dma_device_type": 1 00:37:01.274 }, 00:37:01.274 { 00:37:01.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:01.274 "dma_device_type": 2 00:37:01.274 } 00:37:01.274 ], 00:37:01.274 "driver_specific": {} 00:37:01.274 } 00:37:01.274 ] 00:37:01.274 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:37:01.274 12:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:37:01.274 12:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:01.274 12:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:37:01.532 BaseBdev4 00:37:01.532 12:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:37:01.532 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:37:01.532 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:01.532 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:37:01.532 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:01.532 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:01.532 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:01.789 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:02.046 [ 00:37:02.046 { 00:37:02.046 "name": "BaseBdev4", 00:37:02.046 "aliases": [ 00:37:02.046 "f534f69b-2b95-4d57-8444-87abda574052" 00:37:02.046 ], 00:37:02.046 "product_name": "Malloc disk", 00:37:02.046 "block_size": 512, 00:37:02.046 "num_blocks": 65536, 00:37:02.046 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:02.046 "assigned_rate_limits": { 00:37:02.046 "rw_ios_per_sec": 0, 00:37:02.046 "rw_mbytes_per_sec": 0, 00:37:02.046 "r_mbytes_per_sec": 0, 00:37:02.046 "w_mbytes_per_sec": 0 00:37:02.046 }, 00:37:02.046 "claimed": false, 00:37:02.046 "zoned": false, 00:37:02.046 "supported_io_types": { 00:37:02.046 "read": true, 00:37:02.046 "write": true, 00:37:02.046 "unmap": true, 00:37:02.046 "write_zeroes": true, 00:37:02.046 "flush": true, 00:37:02.046 "reset": true, 00:37:02.046 "compare": false, 00:37:02.046 "compare_and_write": false, 00:37:02.046 "abort": true, 00:37:02.046 "nvme_admin": false, 00:37:02.046 "nvme_io": false 00:37:02.046 }, 00:37:02.046 "memory_domains": [ 00:37:02.046 { 00:37:02.046 "dma_device_id": "system", 00:37:02.046 "dma_device_type": 1 00:37:02.046 }, 00:37:02.046 { 00:37:02.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.046 "dma_device_type": 2 00:37:02.046 } 00:37:02.046 ], 00:37:02.046 "driver_specific": {} 00:37:02.046 } 00:37:02.046 ] 00:37:02.046 12:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:37:02.046 12:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:37:02.046 
12:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:02.046 12:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:02.304 [2024-06-10 12:00:34.168260] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:02.304 [2024-06-10 12:00:34.169130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:02.304 [2024-06-10 12:00:34.169336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:02.304 [2024-06-10 12:00:34.171578] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:02.304 [2024-06-10 12:00:34.171771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.304 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:02.562 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:02.562 "name": "Existed_Raid", 00:37:02.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.562 "strip_size_kb": 64, 00:37:02.562 "state": "configuring", 00:37:02.562 "raid_level": "raid5f", 00:37:02.562 "superblock": false, 00:37:02.562 "num_base_bdevs": 4, 00:37:02.562 "num_base_bdevs_discovered": 3, 00:37:02.562 "num_base_bdevs_operational": 4, 00:37:02.562 "base_bdevs_list": [ 00:37:02.562 { 00:37:02.562 "name": "BaseBdev1", 00:37:02.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.562 "is_configured": false, 00:37:02.562 "data_offset": 0, 00:37:02.562 "data_size": 0 00:37:02.562 }, 00:37:02.562 { 00:37:02.562 "name": "BaseBdev2", 00:37:02.562 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:02.562 "is_configured": true, 00:37:02.562 "data_offset": 0, 00:37:02.562 "data_size": 65536 00:37:02.562 }, 00:37:02.562 { 00:37:02.562 "name": "BaseBdev3", 00:37:02.562 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 
00:37:02.562 "is_configured": true, 00:37:02.562 "data_offset": 0, 00:37:02.562 "data_size": 65536 00:37:02.562 }, 00:37:02.562 { 00:37:02.562 "name": "BaseBdev4", 00:37:02.562 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:02.562 "is_configured": true, 00:37:02.562 "data_offset": 0, 00:37:02.562 "data_size": 65536 00:37:02.562 } 00:37:02.562 ] 00:37:02.562 }' 00:37:02.562 12:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:02.562 12:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.127 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:37:03.394 [2024-06-10 12:00:35.416754] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:03.394 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:03.650 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:03.650 "name": "Existed_Raid", 00:37:03.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:03.650 "strip_size_kb": 64, 00:37:03.650 "state": "configuring", 00:37:03.650 "raid_level": "raid5f", 00:37:03.650 "superblock": false, 00:37:03.651 "num_base_bdevs": 4, 00:37:03.651 "num_base_bdevs_discovered": 2, 00:37:03.651 "num_base_bdevs_operational": 4, 00:37:03.651 "base_bdevs_list": [ 00:37:03.651 { 00:37:03.651 "name": "BaseBdev1", 00:37:03.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:03.651 "is_configured": false, 00:37:03.651 "data_offset": 0, 00:37:03.651 "data_size": 0 00:37:03.651 }, 00:37:03.651 { 00:37:03.651 "name": null, 00:37:03.651 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:03.651 "is_configured": false, 00:37:03.651 "data_offset": 0, 00:37:03.651 "data_size": 65536 00:37:03.651 }, 00:37:03.651 { 00:37:03.651 "name": "BaseBdev3", 00:37:03.651 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:03.651 "is_configured": true, 00:37:03.651 "data_offset": 0, 00:37:03.651 "data_size": 65536 00:37:03.651 }, 
00:37:03.651 { 00:37:03.651 "name": "BaseBdev4", 00:37:03.651 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:03.651 "is_configured": true, 00:37:03.651 "data_offset": 0, 00:37:03.651 "data_size": 65536 00:37:03.651 } 00:37:03.651 ] 00:37:03.651 }' 00:37:03.651 12:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:03.651 12:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.582 12:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.582 12:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:04.582 12:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:37:04.582 12:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:05.147 [2024-06-10 12:00:36.951415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:05.147 BaseBdev1 00:37:05.147 12:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:37:05.147 12:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:37:05.147 12:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:05.147 12:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:37:05.147 12:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:05.147 12:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:05.147 12:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:05.404 12:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:05.734 [ 00:37:05.734 { 00:37:05.734 "name": "BaseBdev1", 00:37:05.734 "aliases": [ 00:37:05.734 "899e5f49-3432-4179-a1d5-4dc4d79349c4" 00:37:05.734 ], 00:37:05.734 "product_name": "Malloc disk", 00:37:05.734 "block_size": 512, 00:37:05.734 "num_blocks": 65536, 00:37:05.734 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:05.734 "assigned_rate_limits": { 00:37:05.734 "rw_ios_per_sec": 0, 00:37:05.734 "rw_mbytes_per_sec": 0, 00:37:05.734 "r_mbytes_per_sec": 0, 00:37:05.734 "w_mbytes_per_sec": 0 00:37:05.734 }, 00:37:05.734 "claimed": true, 00:37:05.734 "claim_type": "exclusive_write", 00:37:05.734 "zoned": false, 00:37:05.734 "supported_io_types": { 00:37:05.734 "read": true, 00:37:05.734 "write": true, 00:37:05.734 "unmap": true, 00:37:05.734 "write_zeroes": true, 00:37:05.734 "flush": true, 00:37:05.734 "reset": true, 00:37:05.734 "compare": false, 00:37:05.734 "compare_and_write": false, 00:37:05.734 "abort": true, 00:37:05.734 "nvme_admin": false, 00:37:05.734 "nvme_io": false 00:37:05.734 }, 00:37:05.734 "memory_domains": [ 00:37:05.734 { 00:37:05.734 "dma_device_id": "system", 00:37:05.734 "dma_device_type": 1 00:37:05.734 }, 00:37:05.734 { 00:37:05.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:37:05.734 "dma_device_type": 2 00:37:05.734 } 00:37:05.734 ], 00:37:05.734 "driver_specific": {} 00:37:05.734 } 00:37:05.734 ] 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:05.734 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:05.992 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:05.992 "name": "Existed_Raid", 00:37:05.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.992 "strip_size_kb": 64, 00:37:05.992 "state": "configuring", 00:37:05.992 "raid_level": "raid5f", 00:37:05.992 "superblock": false, 00:37:05.992 "num_base_bdevs": 4, 00:37:05.992 "num_base_bdevs_discovered": 3, 00:37:05.992 "num_base_bdevs_operational": 4, 00:37:05.992 "base_bdevs_list": [ 00:37:05.992 { 00:37:05.992 "name": "BaseBdev1", 00:37:05.992 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:05.992 "is_configured": true, 00:37:05.992 "data_offset": 0, 00:37:05.992 "data_size": 65536 00:37:05.992 }, 00:37:05.992 { 00:37:05.992 "name": null, 00:37:05.992 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:05.992 "is_configured": false, 00:37:05.992 "data_offset": 0, 00:37:05.992 "data_size": 65536 00:37:05.992 }, 00:37:05.992 { 00:37:05.992 "name": "BaseBdev3", 00:37:05.992 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:05.992 "is_configured": true, 00:37:05.992 "data_offset": 0, 00:37:05.992 "data_size": 65536 00:37:05.992 }, 00:37:05.992 { 00:37:05.992 "name": "BaseBdev4", 00:37:05.992 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:05.992 "is_configured": true, 00:37:05.992 "data_offset": 0, 00:37:05.992 "data_size": 65536 00:37:05.992 } 00:37:05.992 ] 00:37:05.992 }' 00:37:05.992 12:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:05.992 12:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:06.560 12:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:37:06.560 12:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:06.818 12:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:37:06.818 12:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:37:07.076 [2024-06-10 12:00:38.987370] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:07.076 12:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:07.076 12:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:07.076 12:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:07.076 12:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:07.076 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:07.076 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:07.076 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:07.076 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:07.076 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:07.076 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:07.076 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.076 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:07.334 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:07.334 "name": "Existed_Raid", 00:37:07.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.334 "strip_size_kb": 64, 00:37:07.334 "state": "configuring", 00:37:07.334 "raid_level": "raid5f", 00:37:07.334 "superblock": false, 00:37:07.334 "num_base_bdevs": 4, 00:37:07.334 "num_base_bdevs_discovered": 2, 00:37:07.334 "num_base_bdevs_operational": 4, 00:37:07.334 "base_bdevs_list": [ 00:37:07.334 { 00:37:07.334 "name": "BaseBdev1", 00:37:07.334 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:07.334 "is_configured": true, 00:37:07.334 "data_offset": 0, 00:37:07.334 "data_size": 65536 00:37:07.334 }, 00:37:07.334 { 00:37:07.334 "name": null, 00:37:07.334 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:07.334 "is_configured": false, 00:37:07.334 "data_offset": 0, 00:37:07.334 "data_size": 65536 00:37:07.334 }, 00:37:07.334 { 00:37:07.334 "name": null, 00:37:07.334 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:07.334 "is_configured": false, 00:37:07.334 "data_offset": 0, 00:37:07.334 "data_size": 65536 00:37:07.334 }, 00:37:07.334 { 00:37:07.334 "name": "BaseBdev4", 00:37:07.334 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:07.334 "is_configured": true, 00:37:07.334 "data_offset": 0, 00:37:07.334 "data_size": 65536 00:37:07.334 } 00:37:07.334 ] 00:37:07.334 }' 00:37:07.334 12:00:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:07.334 12:00:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:07.898 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.898 12:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:08.155 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:37:08.155 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:08.413 [2024-06-10 12:00:40.347729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.413 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:08.671 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:08.671 "name": "Existed_Raid", 00:37:08.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.671 "strip_size_kb": 64, 00:37:08.671 "state": "configuring", 00:37:08.671 "raid_level": "raid5f", 00:37:08.671 "superblock": false, 00:37:08.671 "num_base_bdevs": 4, 00:37:08.671 "num_base_bdevs_discovered": 3, 00:37:08.671 "num_base_bdevs_operational": 4, 00:37:08.671 "base_bdevs_list": [ 00:37:08.671 { 00:37:08.671 "name": "BaseBdev1", 00:37:08.671 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:08.671 "is_configured": true, 00:37:08.671 "data_offset": 0, 00:37:08.671 "data_size": 65536 00:37:08.671 }, 00:37:08.671 { 00:37:08.671 "name": null, 00:37:08.671 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:08.671 "is_configured": false, 00:37:08.671 "data_offset": 0, 00:37:08.671 "data_size": 65536 00:37:08.671 }, 00:37:08.671 { 00:37:08.671 "name": "BaseBdev3", 00:37:08.671 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:08.671 "is_configured": true, 00:37:08.671 "data_offset": 0, 00:37:08.671 
"data_size": 65536 00:37:08.671 }, 00:37:08.671 { 00:37:08.671 "name": "BaseBdev4", 00:37:08.671 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:08.671 "is_configured": true, 00:37:08.671 "data_offset": 0, 00:37:08.671 "data_size": 65536 00:37:08.671 } 00:37:08.671 ] 00:37:08.671 }' 00:37:08.671 12:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:08.671 12:00:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:09.234 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:09.234 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:09.558 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:37:09.558 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:09.816 [2024-06-10 12:00:41.655982] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:09.816 12:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:10.156 12:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:10.156 "name": "Existed_Raid", 00:37:10.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.156 "strip_size_kb": 64, 00:37:10.156 "state": "configuring", 00:37:10.156 "raid_level": "raid5f", 00:37:10.156 "superblock": false, 00:37:10.156 "num_base_bdevs": 4, 00:37:10.156 "num_base_bdevs_discovered": 2, 00:37:10.156 "num_base_bdevs_operational": 4, 00:37:10.156 "base_bdevs_list": [ 00:37:10.156 { 00:37:10.156 "name": null, 00:37:10.156 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:10.156 "is_configured": false, 00:37:10.156 "data_offset": 0, 00:37:10.156 "data_size": 65536 00:37:10.156 }, 00:37:10.156 { 00:37:10.156 "name": null, 00:37:10.156 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 
00:37:10.156 "is_configured": false, 00:37:10.156 "data_offset": 0, 00:37:10.156 "data_size": 65536 00:37:10.156 }, 00:37:10.156 { 00:37:10.156 "name": "BaseBdev3", 00:37:10.156 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:10.156 "is_configured": true, 00:37:10.156 "data_offset": 0, 00:37:10.156 "data_size": 65536 00:37:10.156 }, 00:37:10.156 { 00:37:10.156 "name": "BaseBdev4", 00:37:10.156 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:10.156 "is_configured": true, 00:37:10.156 "data_offset": 0, 00:37:10.156 "data_size": 65536 00:37:10.156 } 00:37:10.156 ] 00:37:10.156 }' 00:37:10.156 12:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:10.156 12:00:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:10.721 12:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:10.721 12:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:10.980 12:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:37:10.980 12:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:11.238 [2024-06-10 12:00:43.223140] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:11.238 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:11.507 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:11.507 "name": "Existed_Raid", 00:37:11.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:11.507 "strip_size_kb": 64, 00:37:11.507 "state": "configuring", 00:37:11.507 "raid_level": "raid5f", 00:37:11.507 "superblock": false, 00:37:11.507 "num_base_bdevs": 4, 00:37:11.507 "num_base_bdevs_discovered": 3, 00:37:11.507 "num_base_bdevs_operational": 4, 00:37:11.507 
"base_bdevs_list": [ 00:37:11.507 { 00:37:11.507 "name": null, 00:37:11.507 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:11.507 "is_configured": false, 00:37:11.507 "data_offset": 0, 00:37:11.507 "data_size": 65536 00:37:11.507 }, 00:37:11.507 { 00:37:11.507 "name": "BaseBdev2", 00:37:11.507 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:11.507 "is_configured": true, 00:37:11.507 "data_offset": 0, 00:37:11.507 "data_size": 65536 00:37:11.507 }, 00:37:11.507 { 00:37:11.507 "name": "BaseBdev3", 00:37:11.507 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:11.507 "is_configured": true, 00:37:11.507 "data_offset": 0, 00:37:11.507 "data_size": 65536 00:37:11.507 }, 00:37:11.507 { 00:37:11.507 "name": "BaseBdev4", 00:37:11.507 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:11.507 "is_configured": true, 00:37:11.507 "data_offset": 0, 00:37:11.507 "data_size": 65536 00:37:11.507 } 00:37:11.507 ] 00:37:11.507 }' 00:37:11.507 12:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:11.507 12:00:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:12.076 12:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:12.076 12:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:12.642 12:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:37:12.642 12:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:12.642 12:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:12.642 12:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 899e5f49-3432-4179-a1d5-4dc4d79349c4 00:37:12.900 [2024-06-10 12:00:44.897845] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:12.900 [2024-06-10 12:00:44.898111] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:37:12.900 [2024-06-10 12:00:44.898158] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:37:12.900 [2024-06-10 12:00:44.898361] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:12.900 [2024-06-10 12:00:44.906199] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:37:12.900 [2024-06-10 12:00:44.906357] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:37:12.900 [2024-06-10 12:00:44.906715] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:12.900 NewBaseBdev 00:37:12.900 12:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:37:12.900 12:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:37:12.900 12:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:12.900 12:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local i 00:37:12.900 12:00:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:12.900 12:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:12.900 12:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:13.158 12:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:13.415 [ 00:37:13.415 { 00:37:13.415 "name": "NewBaseBdev", 00:37:13.415 "aliases": [ 00:37:13.415 "899e5f49-3432-4179-a1d5-4dc4d79349c4" 00:37:13.415 ], 00:37:13.415 "product_name": "Malloc disk", 00:37:13.415 "block_size": 512, 00:37:13.415 "num_blocks": 65536, 00:37:13.415 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:13.415 "assigned_rate_limits": { 00:37:13.415 "rw_ios_per_sec": 0, 00:37:13.415 "rw_mbytes_per_sec": 0, 00:37:13.415 "r_mbytes_per_sec": 0, 00:37:13.415 "w_mbytes_per_sec": 0 00:37:13.415 }, 00:37:13.415 "claimed": true, 00:37:13.415 "claim_type": "exclusive_write", 00:37:13.415 "zoned": false, 00:37:13.415 "supported_io_types": { 00:37:13.415 "read": true, 00:37:13.415 "write": true, 00:37:13.415 "unmap": true, 00:37:13.415 "write_zeroes": true, 00:37:13.415 "flush": true, 00:37:13.415 "reset": true, 00:37:13.415 "compare": false, 00:37:13.415 "compare_and_write": false, 00:37:13.415 "abort": true, 00:37:13.415 "nvme_admin": false, 00:37:13.415 "nvme_io": false 00:37:13.415 }, 00:37:13.415 "memory_domains": [ 00:37:13.415 { 00:37:13.415 "dma_device_id": "system", 00:37:13.415 "dma_device_type": 1 00:37:13.415 }, 00:37:13.415 { 00:37:13.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.415 "dma_device_type": 2 00:37:13.415 } 00:37:13.415 ], 00:37:13.415 "driver_specific": {} 00:37:13.415 } 00:37:13.415 ] 00:37:13.415 12:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:37:13.415 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:13.415 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.416 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:37:13.673 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:13.673 "name": "Existed_Raid", 00:37:13.673 "uuid": "da8046b7-6079-4a95-b441-22cdd7a95a23", 00:37:13.673 "strip_size_kb": 64, 00:37:13.673 "state": "online", 00:37:13.673 "raid_level": "raid5f", 00:37:13.673 "superblock": false, 00:37:13.673 "num_base_bdevs": 4, 00:37:13.673 "num_base_bdevs_discovered": 4, 00:37:13.673 "num_base_bdevs_operational": 4, 00:37:13.673 "base_bdevs_list": [ 00:37:13.673 { 00:37:13.673 "name": "NewBaseBdev", 00:37:13.673 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:13.673 "is_configured": true, 00:37:13.673 "data_offset": 0, 00:37:13.673 "data_size": 65536 00:37:13.673 }, 00:37:13.673 { 00:37:13.673 "name": "BaseBdev2", 00:37:13.673 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:13.673 "is_configured": true, 00:37:13.673 "data_offset": 0, 00:37:13.673 "data_size": 65536 00:37:13.673 }, 00:37:13.673 { 00:37:13.673 "name": "BaseBdev3", 00:37:13.673 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:13.673 "is_configured": true, 00:37:13.673 "data_offset": 0, 00:37:13.673 "data_size": 65536 00:37:13.673 }, 00:37:13.673 { 00:37:13.673 "name": "BaseBdev4", 00:37:13.673 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:13.673 "is_configured": true, 00:37:13.673 "data_offset": 0, 00:37:13.673 "data_size": 65536 00:37:13.673 } 00:37:13.673 ] 00:37:13.673 }' 00:37:13.673 12:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:13.673 12:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:14.608 [2024-06-10 12:00:46.541584] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:14.608 "name": "Existed_Raid", 00:37:14.608 "aliases": [ 00:37:14.608 "da8046b7-6079-4a95-b441-22cdd7a95a23" 00:37:14.608 ], 00:37:14.608 "product_name": "Raid Volume", 00:37:14.608 "block_size": 512, 00:37:14.608 "num_blocks": 196608, 00:37:14.608 "uuid": "da8046b7-6079-4a95-b441-22cdd7a95a23", 00:37:14.608 "assigned_rate_limits": { 00:37:14.608 "rw_ios_per_sec": 0, 00:37:14.608 "rw_mbytes_per_sec": 0, 00:37:14.608 "r_mbytes_per_sec": 0, 00:37:14.608 "w_mbytes_per_sec": 0 00:37:14.608 }, 00:37:14.608 "claimed": false, 00:37:14.608 "zoned": false, 00:37:14.608 "supported_io_types": { 00:37:14.608 "read": true, 00:37:14.608 "write": true, 00:37:14.608 "unmap": false, 00:37:14.608 "write_zeroes": 
true, 00:37:14.608 "flush": false, 00:37:14.608 "reset": true, 00:37:14.608 "compare": false, 00:37:14.608 "compare_and_write": false, 00:37:14.608 "abort": false, 00:37:14.608 "nvme_admin": false, 00:37:14.608 "nvme_io": false 00:37:14.608 }, 00:37:14.608 "driver_specific": { 00:37:14.608 "raid": { 00:37:14.608 "uuid": "da8046b7-6079-4a95-b441-22cdd7a95a23", 00:37:14.608 "strip_size_kb": 64, 00:37:14.608 "state": "online", 00:37:14.608 "raid_level": "raid5f", 00:37:14.608 "superblock": false, 00:37:14.608 "num_base_bdevs": 4, 00:37:14.608 "num_base_bdevs_discovered": 4, 00:37:14.608 "num_base_bdevs_operational": 4, 00:37:14.608 "base_bdevs_list": [ 00:37:14.608 { 00:37:14.608 "name": "NewBaseBdev", 00:37:14.608 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:14.608 "is_configured": true, 00:37:14.608 "data_offset": 0, 00:37:14.608 "data_size": 65536 00:37:14.608 }, 00:37:14.608 { 00:37:14.608 "name": "BaseBdev2", 00:37:14.608 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:14.608 "is_configured": true, 00:37:14.608 "data_offset": 0, 00:37:14.608 "data_size": 65536 00:37:14.608 }, 00:37:14.608 { 00:37:14.608 "name": "BaseBdev3", 00:37:14.608 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:14.608 "is_configured": true, 00:37:14.608 "data_offset": 0, 00:37:14.608 "data_size": 65536 00:37:14.608 }, 00:37:14.608 { 00:37:14.608 "name": "BaseBdev4", 00:37:14.608 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:14.608 "is_configured": true, 00:37:14.608 "data_offset": 0, 00:37:14.608 "data_size": 65536 00:37:14.608 } 00:37:14.608 ] 00:37:14.608 } 00:37:14.608 } 00:37:14.608 }' 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:37:14.608 BaseBdev2 00:37:14.608 BaseBdev3 00:37:14.608 BaseBdev4' 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:37:14.608 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:14.866 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:14.866 "name": "NewBaseBdev", 00:37:14.866 "aliases": [ 00:37:14.866 "899e5f49-3432-4179-a1d5-4dc4d79349c4" 00:37:14.866 ], 00:37:14.866 "product_name": "Malloc disk", 00:37:14.866 "block_size": 512, 00:37:14.866 "num_blocks": 65536, 00:37:14.866 "uuid": "899e5f49-3432-4179-a1d5-4dc4d79349c4", 00:37:14.866 "assigned_rate_limits": { 00:37:14.866 "rw_ios_per_sec": 0, 00:37:14.866 "rw_mbytes_per_sec": 0, 00:37:14.866 "r_mbytes_per_sec": 0, 00:37:14.866 "w_mbytes_per_sec": 0 00:37:14.866 }, 00:37:14.866 "claimed": true, 00:37:14.866 "claim_type": "exclusive_write", 00:37:14.866 "zoned": false, 00:37:14.866 "supported_io_types": { 00:37:14.866 "read": true, 00:37:14.866 "write": true, 00:37:14.866 "unmap": true, 00:37:14.866 "write_zeroes": true, 00:37:14.866 "flush": true, 00:37:14.866 "reset": true, 00:37:14.866 "compare": false, 00:37:14.866 "compare_and_write": false, 00:37:14.866 "abort": true, 00:37:14.866 "nvme_admin": false, 00:37:14.866 "nvme_io": false 00:37:14.866 }, 00:37:14.866 "memory_domains": [ 00:37:14.866 { 00:37:14.866 
"dma_device_id": "system", 00:37:14.866 "dma_device_type": 1 00:37:14.866 }, 00:37:14.866 { 00:37:14.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:14.866 "dma_device_type": 2 00:37:14.866 } 00:37:14.866 ], 00:37:14.866 "driver_specific": {} 00:37:14.866 }' 00:37:14.866 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:14.866 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:14.866 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:14.866 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.126 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.126 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:15.126 12:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:15.126 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:15.126 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:15.126 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:15.126 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:15.126 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:15.126 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:15.126 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:15.126 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:15.454 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:15.454 "name": "BaseBdev2", 00:37:15.454 "aliases": [ 00:37:15.454 "95699549-b14a-4f4c-972e-b0c3678f7c65" 00:37:15.454 ], 00:37:15.454 "product_name": "Malloc disk", 00:37:15.454 "block_size": 512, 00:37:15.454 "num_blocks": 65536, 00:37:15.454 "uuid": "95699549-b14a-4f4c-972e-b0c3678f7c65", 00:37:15.454 "assigned_rate_limits": { 00:37:15.454 "rw_ios_per_sec": 0, 00:37:15.454 "rw_mbytes_per_sec": 0, 00:37:15.454 "r_mbytes_per_sec": 0, 00:37:15.454 "w_mbytes_per_sec": 0 00:37:15.454 }, 00:37:15.454 "claimed": true, 00:37:15.454 "claim_type": "exclusive_write", 00:37:15.454 "zoned": false, 00:37:15.454 "supported_io_types": { 00:37:15.454 "read": true, 00:37:15.454 "write": true, 00:37:15.454 "unmap": true, 00:37:15.454 "write_zeroes": true, 00:37:15.454 "flush": true, 00:37:15.454 "reset": true, 00:37:15.454 "compare": false, 00:37:15.454 "compare_and_write": false, 00:37:15.454 "abort": true, 00:37:15.454 "nvme_admin": false, 00:37:15.454 "nvme_io": false 00:37:15.454 }, 00:37:15.454 "memory_domains": [ 00:37:15.454 { 00:37:15.454 "dma_device_id": "system", 00:37:15.454 "dma_device_type": 1 00:37:15.454 }, 00:37:15.454 { 00:37:15.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:15.454 "dma_device_type": 2 00:37:15.454 } 00:37:15.454 ], 00:37:15.454 "driver_specific": {} 00:37:15.454 }' 00:37:15.454 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:15.454 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:37:15.454 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:15.454 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.454 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:37:15.711 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:15.969 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:15.969 "name": "BaseBdev3", 00:37:15.969 "aliases": [ 00:37:15.969 "4db55e9d-258f-4d34-8132-3784dae67ec3" 00:37:15.969 ], 00:37:15.969 "product_name": "Malloc disk", 00:37:15.969 "block_size": 512, 00:37:15.969 "num_blocks": 65536, 00:37:15.969 "uuid": "4db55e9d-258f-4d34-8132-3784dae67ec3", 00:37:15.969 "assigned_rate_limits": { 00:37:15.969 "rw_ios_per_sec": 0, 00:37:15.969 "rw_mbytes_per_sec": 0, 00:37:15.969 "r_mbytes_per_sec": 0, 00:37:15.969 "w_mbytes_per_sec": 0 00:37:15.969 }, 00:37:15.969 "claimed": true, 00:37:15.969 "claim_type": "exclusive_write", 00:37:15.969 "zoned": false, 00:37:15.969 "supported_io_types": { 00:37:15.969 "read": true, 00:37:15.969 "write": true, 00:37:15.969 "unmap": true, 00:37:15.969 "write_zeroes": true, 00:37:15.969 "flush": true, 00:37:15.969 "reset": true, 00:37:15.969 "compare": false, 00:37:15.969 "compare_and_write": false, 00:37:15.969 "abort": true, 00:37:15.969 "nvme_admin": false, 00:37:15.969 "nvme_io": false 00:37:15.969 }, 00:37:15.969 "memory_domains": [ 00:37:15.969 { 00:37:15.969 "dma_device_id": "system", 00:37:15.969 "dma_device_type": 1 00:37:15.969 }, 00:37:15.969 { 00:37:15.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:15.969 "dma_device_type": 2 00:37:15.969 } 00:37:15.969 ], 00:37:15.969 "driver_specific": {} 00:37:15.969 }' 00:37:15.969 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:15.969 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:15.969 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:15.969 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.969 12:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.969 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:15.969 12:00:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:16.228 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:16.228 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:16.228 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:16.228 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:16.228 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:16.228 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:16.228 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:37:16.228 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:16.486 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:16.486 "name": "BaseBdev4", 00:37:16.486 "aliases": [ 00:37:16.486 "f534f69b-2b95-4d57-8444-87abda574052" 00:37:16.486 ], 00:37:16.486 "product_name": "Malloc disk", 00:37:16.486 "block_size": 512, 00:37:16.486 "num_blocks": 65536, 00:37:16.486 "uuid": "f534f69b-2b95-4d57-8444-87abda574052", 00:37:16.486 "assigned_rate_limits": { 00:37:16.486 "rw_ios_per_sec": 0, 00:37:16.486 "rw_mbytes_per_sec": 0, 00:37:16.486 "r_mbytes_per_sec": 0, 00:37:16.486 "w_mbytes_per_sec": 0 00:37:16.486 }, 00:37:16.486 "claimed": true, 00:37:16.486 "claim_type": "exclusive_write", 00:37:16.486 "zoned": false, 00:37:16.486 "supported_io_types": { 00:37:16.486 "read": true, 00:37:16.486 "write": true, 00:37:16.486 "unmap": true, 00:37:16.486 "write_zeroes": true, 00:37:16.486 "flush": true, 00:37:16.486 "reset": true, 00:37:16.486 "compare": false, 00:37:16.486 "compare_and_write": false, 00:37:16.486 "abort": true, 00:37:16.486 "nvme_admin": false, 00:37:16.486 "nvme_io": false 00:37:16.486 }, 00:37:16.486 "memory_domains": [ 00:37:16.486 { 00:37:16.486 "dma_device_id": "system", 00:37:16.486 "dma_device_type": 1 00:37:16.486 }, 00:37:16.486 { 00:37:16.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:16.486 "dma_device_type": 2 00:37:16.486 } 00:37:16.486 ], 00:37:16.486 "driver_specific": {} 00:37:16.486 }' 00:37:16.486 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:16.486 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:16.486 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:16.486 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:16.746 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:16.746 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:16.746 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:16.746 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:16.746 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:16.746 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:16.746 12:00:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:17.005 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:17.005 12:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:17.263 [2024-06-10 12:00:49.077949] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:17.263 [2024-06-10 12:00:49.078198] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:17.263 [2024-06-10 12:00:49.078369] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:17.263 [2024-06-10 12:00:49.078769] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:17.263 [2024-06-10 12:00:49.078890] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 156074 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 156074 ']' 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # kill -0 156074 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # uname 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 156074 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 156074' 00:37:17.263 killing process with pid 156074 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # kill 156074 00:37:17.263 [2024-06-10 12:00:49.128143] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:17.263 12:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # wait 156074 00:37:17.829 [2024-06-10 12:00:49.582243] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:19.204 ************************************ 00:37:19.204 END TEST raid5f_state_function_test 00:37:19.204 ************************************ 00:37:19.205 12:00:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:37:19.205 00:37:19.205 real 0m36.442s 00:37:19.205 user 1m6.019s 00:37:19.205 sys 0m5.256s 00:37:19.205 12:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:19.205 12:00:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.205 12:00:51 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:37:19.205 12:00:51 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:37:19.205 12:00:51 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:19.205 12:00:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:19.205 ************************************ 
00:37:19.205 START TEST raid5f_state_function_test_sb 00:37:19.205 ************************************ 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid5f 4 true 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:37:19.205 
12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=157194 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 157194' 00:37:19.205 Process raid pid: 157194 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 157194 /var/tmp/spdk-raid.sock 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 157194 ']' 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:19.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:19.205 12:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:19.205 [2024-06-10 12:00:51.142969] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:37:19.205 [2024-06-10 12:00:51.143319] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:19.464 [2024-06-10 12:00:51.311799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.464 [2024-06-10 12:00:51.517301] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.723 [2024-06-10 12:00:51.732957] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:20.288 12:00:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:20.288 12:00:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:37:20.288 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:20.547 [2024-06-10 12:00:52.348223] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:20.547 [2024-06-10 12:00:52.348591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:20.547 [2024-06-10 12:00:52.348727] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:20.547 [2024-06-10 12:00:52.348894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:20.547 [2024-06-10 12:00:52.349010] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:20.547 [2024-06-10 12:00:52.349080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:20.547 [2024-06-10 12:00:52.349124] bdev.c:8114:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:37:20.548 [2024-06-10 12:00:52.349262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.548 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:20.809 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:20.809 "name": "Existed_Raid", 00:37:20.809 "uuid": "bfec69e1-0cd7-4157-9973-3ca1843c41d5", 00:37:20.809 "strip_size_kb": 64, 00:37:20.809 "state": "configuring", 00:37:20.809 "raid_level": "raid5f", 00:37:20.809 "superblock": true, 00:37:20.809 "num_base_bdevs": 4, 00:37:20.809 "num_base_bdevs_discovered": 0, 00:37:20.809 "num_base_bdevs_operational": 4, 00:37:20.809 "base_bdevs_list": [ 00:37:20.809 { 00:37:20.809 "name": "BaseBdev1", 00:37:20.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.809 "is_configured": false, 00:37:20.809 "data_offset": 0, 00:37:20.809 "data_size": 0 00:37:20.809 }, 00:37:20.809 { 00:37:20.809 "name": "BaseBdev2", 00:37:20.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.809 "is_configured": false, 00:37:20.809 "data_offset": 0, 00:37:20.809 "data_size": 0 00:37:20.809 }, 00:37:20.809 { 00:37:20.809 "name": "BaseBdev3", 00:37:20.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.809 "is_configured": false, 00:37:20.809 "data_offset": 0, 00:37:20.809 "data_size": 0 00:37:20.809 }, 00:37:20.809 { 00:37:20.809 "name": "BaseBdev4", 00:37:20.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.809 "is_configured": false, 00:37:20.809 "data_offset": 0, 00:37:20.809 "data_size": 0 00:37:20.809 } 00:37:20.809 ] 00:37:20.809 }' 00:37:20.809 12:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:20.810 12:00:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:21.377 12:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
00:37:21.635 [2024-06-10 12:00:53.536248] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:21.635 [2024-06-10 12:00:53.536488] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:37:21.635 12:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:21.895 [2024-06-10 12:00:53.788338] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:21.895 [2024-06-10 12:00:53.788566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:21.895 [2024-06-10 12:00:53.788656] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:21.895 [2024-06-10 12:00:53.788741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:21.895 [2024-06-10 12:00:53.788835] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:21.895 [2024-06-10 12:00:53.788907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:21.895 [2024-06-10 12:00:53.788939] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:21.895 [2024-06-10 12:00:53.789044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:21.895 12:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:22.153 [2024-06-10 12:00:54.110390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:22.153 BaseBdev1 00:37:22.153 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:37:22.153 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:37:22.153 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:22.153 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:22.153 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:22.153 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:22.153 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:22.412 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:22.671 [ 00:37:22.671 { 00:37:22.671 "name": "BaseBdev1", 00:37:22.671 "aliases": [ 00:37:22.671 "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1" 00:37:22.671 ], 00:37:22.671 "product_name": "Malloc disk", 00:37:22.671 "block_size": 512, 00:37:22.671 "num_blocks": 65536, 00:37:22.671 "uuid": "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1", 00:37:22.671 "assigned_rate_limits": { 00:37:22.671 "rw_ios_per_sec": 0, 00:37:22.671 "rw_mbytes_per_sec": 0, 00:37:22.671 "r_mbytes_per_sec": 0, 00:37:22.671 "w_mbytes_per_sec": 0 00:37:22.671 
}, 00:37:22.671 "claimed": true, 00:37:22.671 "claim_type": "exclusive_write", 00:37:22.671 "zoned": false, 00:37:22.671 "supported_io_types": { 00:37:22.671 "read": true, 00:37:22.671 "write": true, 00:37:22.671 "unmap": true, 00:37:22.671 "write_zeroes": true, 00:37:22.671 "flush": true, 00:37:22.671 "reset": true, 00:37:22.671 "compare": false, 00:37:22.671 "compare_and_write": false, 00:37:22.671 "abort": true, 00:37:22.671 "nvme_admin": false, 00:37:22.671 "nvme_io": false 00:37:22.671 }, 00:37:22.671 "memory_domains": [ 00:37:22.671 { 00:37:22.671 "dma_device_id": "system", 00:37:22.671 "dma_device_type": 1 00:37:22.671 }, 00:37:22.671 { 00:37:22.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:22.671 "dma_device_type": 2 00:37:22.671 } 00:37:22.671 ], 00:37:22.671 "driver_specific": {} 00:37:22.671 } 00:37:22.671 ] 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:22.671 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:22.930 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:22.930 "name": "Existed_Raid", 00:37:22.930 "uuid": "93cd13f0-5d64-45ac-bdc4-344b009205c6", 00:37:22.930 "strip_size_kb": 64, 00:37:22.930 "state": "configuring", 00:37:22.930 "raid_level": "raid5f", 00:37:22.930 "superblock": true, 00:37:22.930 "num_base_bdevs": 4, 00:37:22.930 "num_base_bdevs_discovered": 1, 00:37:22.930 "num_base_bdevs_operational": 4, 00:37:22.930 "base_bdevs_list": [ 00:37:22.930 { 00:37:22.930 "name": "BaseBdev1", 00:37:22.930 "uuid": "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1", 00:37:22.930 "is_configured": true, 00:37:22.930 "data_offset": 2048, 00:37:22.930 "data_size": 63488 00:37:22.930 }, 00:37:22.930 { 00:37:22.930 "name": "BaseBdev2", 00:37:22.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.930 "is_configured": false, 00:37:22.930 "data_offset": 0, 00:37:22.930 "data_size": 0 00:37:22.930 }, 00:37:22.930 { 00:37:22.930 "name": "BaseBdev3", 00:37:22.930 "uuid": "00000000-0000-0000-0000-000000000000", 
00:37:22.930 "is_configured": false, 00:37:22.930 "data_offset": 0, 00:37:22.930 "data_size": 0 00:37:22.930 }, 00:37:22.930 { 00:37:22.930 "name": "BaseBdev4", 00:37:22.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.930 "is_configured": false, 00:37:22.930 "data_offset": 0, 00:37:22.930 "data_size": 0 00:37:22.930 } 00:37:22.930 ] 00:37:22.930 }' 00:37:22.930 12:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:22.930 12:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:23.494 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:23.753 [2024-06-10 12:00:55.578725] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:23.753 [2024-06-10 12:00:55.578950] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:23.753 [2024-06-10 12:00:55.770808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:23.753 [2024-06-10 12:00:55.773204] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:23.753 [2024-06-10 12:00:55.773385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:23.753 [2024-06-10 12:00:55.773480] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:23.753 [2024-06-10 12:00:55.773544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:23.753 [2024-06-10 12:00:55.773630] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:23.753 [2024-06-10 12:00:55.773711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:23.753 12:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.384 12:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:24.384 "name": "Existed_Raid", 00:37:24.384 "uuid": "82ed5d19-8c41-4a6d-988f-3ebb76db8e78", 00:37:24.384 "strip_size_kb": 64, 00:37:24.384 "state": "configuring", 00:37:24.384 "raid_level": "raid5f", 00:37:24.384 "superblock": true, 00:37:24.384 "num_base_bdevs": 4, 00:37:24.384 "num_base_bdevs_discovered": 1, 00:37:24.384 "num_base_bdevs_operational": 4, 00:37:24.384 "base_bdevs_list": [ 00:37:24.384 { 00:37:24.384 "name": "BaseBdev1", 00:37:24.384 "uuid": "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1", 00:37:24.384 "is_configured": true, 00:37:24.384 "data_offset": 2048, 00:37:24.384 "data_size": 63488 00:37:24.384 }, 00:37:24.384 { 00:37:24.384 "name": "BaseBdev2", 00:37:24.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:24.384 "is_configured": false, 00:37:24.384 "data_offset": 0, 00:37:24.384 "data_size": 0 00:37:24.384 }, 00:37:24.384 { 00:37:24.384 "name": "BaseBdev3", 00:37:24.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:24.384 "is_configured": false, 00:37:24.384 "data_offset": 0, 00:37:24.384 "data_size": 0 00:37:24.384 }, 00:37:24.384 { 00:37:24.384 "name": "BaseBdev4", 00:37:24.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:24.384 "is_configured": false, 00:37:24.384 "data_offset": 0, 00:37:24.385 "data_size": 0 00:37:24.385 } 00:37:24.385 ] 00:37:24.385 }' 00:37:24.385 12:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:24.385 12:00:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:24.952 12:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:37:25.211 [2024-06-10 12:00:57.054952] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:25.211 BaseBdev2 00:37:25.211 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:37:25.211 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:37:25.211 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:25.211 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:25.211 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:25.211 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:25.211 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:25.469 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:25.727 [ 00:37:25.727 { 
00:37:25.727 "name": "BaseBdev2", 00:37:25.727 "aliases": [ 00:37:25.727 "57d0df3b-599a-4786-b28f-1d40c557c5d7" 00:37:25.727 ], 00:37:25.727 "product_name": "Malloc disk", 00:37:25.727 "block_size": 512, 00:37:25.727 "num_blocks": 65536, 00:37:25.727 "uuid": "57d0df3b-599a-4786-b28f-1d40c557c5d7", 00:37:25.727 "assigned_rate_limits": { 00:37:25.727 "rw_ios_per_sec": 0, 00:37:25.727 "rw_mbytes_per_sec": 0, 00:37:25.727 "r_mbytes_per_sec": 0, 00:37:25.727 "w_mbytes_per_sec": 0 00:37:25.727 }, 00:37:25.727 "claimed": true, 00:37:25.727 "claim_type": "exclusive_write", 00:37:25.727 "zoned": false, 00:37:25.727 "supported_io_types": { 00:37:25.727 "read": true, 00:37:25.727 "write": true, 00:37:25.727 "unmap": true, 00:37:25.727 "write_zeroes": true, 00:37:25.727 "flush": true, 00:37:25.727 "reset": true, 00:37:25.727 "compare": false, 00:37:25.727 "compare_and_write": false, 00:37:25.727 "abort": true, 00:37:25.727 "nvme_admin": false, 00:37:25.727 "nvme_io": false 00:37:25.727 }, 00:37:25.727 "memory_domains": [ 00:37:25.727 { 00:37:25.727 "dma_device_id": "system", 00:37:25.727 "dma_device_type": 1 00:37:25.727 }, 00:37:25.727 { 00:37:25.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:25.727 "dma_device_type": 2 00:37:25.727 } 00:37:25.727 ], 00:37:25.727 "driver_specific": {} 00:37:25.727 } 00:37:25.727 ] 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.727 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:25.996 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:25.996 "name": "Existed_Raid", 00:37:25.996 "uuid": "82ed5d19-8c41-4a6d-988f-3ebb76db8e78", 00:37:25.996 "strip_size_kb": 64, 00:37:25.996 "state": "configuring", 00:37:25.996 "raid_level": "raid5f", 00:37:25.996 "superblock": true, 00:37:25.996 
"num_base_bdevs": 4, 00:37:25.996 "num_base_bdevs_discovered": 2, 00:37:25.996 "num_base_bdevs_operational": 4, 00:37:25.996 "base_bdevs_list": [ 00:37:25.996 { 00:37:25.996 "name": "BaseBdev1", 00:37:25.996 "uuid": "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1", 00:37:25.996 "is_configured": true, 00:37:25.996 "data_offset": 2048, 00:37:25.996 "data_size": 63488 00:37:25.996 }, 00:37:25.996 { 00:37:25.996 "name": "BaseBdev2", 00:37:25.996 "uuid": "57d0df3b-599a-4786-b28f-1d40c557c5d7", 00:37:25.996 "is_configured": true, 00:37:25.996 "data_offset": 2048, 00:37:25.996 "data_size": 63488 00:37:25.996 }, 00:37:25.996 { 00:37:25.996 "name": "BaseBdev3", 00:37:25.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:25.996 "is_configured": false, 00:37:25.996 "data_offset": 0, 00:37:25.996 "data_size": 0 00:37:25.996 }, 00:37:25.996 { 00:37:25.996 "name": "BaseBdev4", 00:37:25.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:25.996 "is_configured": false, 00:37:25.996 "data_offset": 0, 00:37:25.996 "data_size": 0 00:37:25.996 } 00:37:25.996 ] 00:37:25.996 }' 00:37:25.996 12:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:25.996 12:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:26.560 12:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:37:26.817 [2024-06-10 12:00:58.732053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:26.817 BaseBdev3 00:37:26.817 12:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:37:26.817 12:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:37:26.817 12:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:26.817 12:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:26.817 12:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:26.817 12:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:26.817 12:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:27.075 12:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:27.336 [ 00:37:27.336 { 00:37:27.336 "name": "BaseBdev3", 00:37:27.336 "aliases": [ 00:37:27.336 "b0fbd5cc-9866-478e-928c-bc2574d75ef5" 00:37:27.336 ], 00:37:27.336 "product_name": "Malloc disk", 00:37:27.336 "block_size": 512, 00:37:27.336 "num_blocks": 65536, 00:37:27.336 "uuid": "b0fbd5cc-9866-478e-928c-bc2574d75ef5", 00:37:27.336 "assigned_rate_limits": { 00:37:27.336 "rw_ios_per_sec": 0, 00:37:27.336 "rw_mbytes_per_sec": 0, 00:37:27.336 "r_mbytes_per_sec": 0, 00:37:27.336 "w_mbytes_per_sec": 0 00:37:27.336 }, 00:37:27.336 "claimed": true, 00:37:27.336 "claim_type": "exclusive_write", 00:37:27.336 "zoned": false, 00:37:27.336 "supported_io_types": { 00:37:27.336 "read": true, 00:37:27.336 "write": true, 00:37:27.336 "unmap": true, 00:37:27.336 "write_zeroes": true, 00:37:27.336 "flush": true, 
00:37:27.336 "reset": true, 00:37:27.336 "compare": false, 00:37:27.336 "compare_and_write": false, 00:37:27.336 "abort": true, 00:37:27.336 "nvme_admin": false, 00:37:27.336 "nvme_io": false 00:37:27.336 }, 00:37:27.336 "memory_domains": [ 00:37:27.336 { 00:37:27.336 "dma_device_id": "system", 00:37:27.336 "dma_device_type": 1 00:37:27.336 }, 00:37:27.336 { 00:37:27.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:27.336 "dma_device_type": 2 00:37:27.336 } 00:37:27.336 ], 00:37:27.336 "driver_specific": {} 00:37:27.336 } 00:37:27.336 ] 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:27.336 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:27.598 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:27.598 "name": "Existed_Raid", 00:37:27.598 "uuid": "82ed5d19-8c41-4a6d-988f-3ebb76db8e78", 00:37:27.598 "strip_size_kb": 64, 00:37:27.598 "state": "configuring", 00:37:27.598 "raid_level": "raid5f", 00:37:27.598 "superblock": true, 00:37:27.598 "num_base_bdevs": 4, 00:37:27.598 "num_base_bdevs_discovered": 3, 00:37:27.598 "num_base_bdevs_operational": 4, 00:37:27.598 "base_bdevs_list": [ 00:37:27.598 { 00:37:27.598 "name": "BaseBdev1", 00:37:27.598 "uuid": "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1", 00:37:27.598 "is_configured": true, 00:37:27.598 "data_offset": 2048, 00:37:27.598 "data_size": 63488 00:37:27.598 }, 00:37:27.598 { 00:37:27.598 "name": "BaseBdev2", 00:37:27.598 "uuid": "57d0df3b-599a-4786-b28f-1d40c557c5d7", 00:37:27.598 "is_configured": true, 00:37:27.598 "data_offset": 2048, 00:37:27.598 "data_size": 63488 00:37:27.598 }, 00:37:27.598 { 00:37:27.598 "name": "BaseBdev3", 00:37:27.598 "uuid": "b0fbd5cc-9866-478e-928c-bc2574d75ef5", 00:37:27.598 "is_configured": true, 00:37:27.598 "data_offset": 2048, 
00:37:27.598 "data_size": 63488 00:37:27.598 }, 00:37:27.598 { 00:37:27.598 "name": "BaseBdev4", 00:37:27.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.598 "is_configured": false, 00:37:27.598 "data_offset": 0, 00:37:27.598 "data_size": 0 00:37:27.598 } 00:37:27.598 ] 00:37:27.598 }' 00:37:27.598 12:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:27.598 12:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:28.164 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:37:28.422 [2024-06-10 12:01:00.427571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:28.422 [2024-06-10 12:01:00.428080] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:37:28.422 [2024-06-10 12:01:00.428220] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:28.422 [2024-06-10 12:01:00.428374] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:37:28.422 BaseBdev4 00:37:28.422 [2024-06-10 12:01:00.436714] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:37:28.422 [2024-06-10 12:01:00.436853] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:37:28.422 [2024-06-10 12:01:00.437219] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:28.422 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:37:28.422 12:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:37:28.422 12:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:28.422 12:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:28.422 12:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:28.422 12:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:28.422 12:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:28.680 12:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:28.938 [ 00:37:28.938 { 00:37:28.938 "name": "BaseBdev4", 00:37:28.938 "aliases": [ 00:37:28.938 "00dd9c66-08bb-49ed-ab2f-9c4a5b4b1da9" 00:37:28.938 ], 00:37:28.938 "product_name": "Malloc disk", 00:37:28.938 "block_size": 512, 00:37:28.938 "num_blocks": 65536, 00:37:28.938 "uuid": "00dd9c66-08bb-49ed-ab2f-9c4a5b4b1da9", 00:37:28.938 "assigned_rate_limits": { 00:37:28.938 "rw_ios_per_sec": 0, 00:37:28.938 "rw_mbytes_per_sec": 0, 00:37:28.938 "r_mbytes_per_sec": 0, 00:37:28.938 "w_mbytes_per_sec": 0 00:37:28.938 }, 00:37:28.938 "claimed": true, 00:37:28.939 "claim_type": "exclusive_write", 00:37:28.939 "zoned": false, 00:37:28.939 "supported_io_types": { 00:37:28.939 "read": true, 00:37:28.939 "write": true, 00:37:28.939 "unmap": true, 00:37:28.939 "write_zeroes": true, 00:37:28.939 "flush": true, 
00:37:28.939 "reset": true, 00:37:28.939 "compare": false, 00:37:28.939 "compare_and_write": false, 00:37:28.939 "abort": true, 00:37:28.939 "nvme_admin": false, 00:37:28.939 "nvme_io": false 00:37:28.939 }, 00:37:28.939 "memory_domains": [ 00:37:28.939 { 00:37:28.939 "dma_device_id": "system", 00:37:28.939 "dma_device_type": 1 00:37:28.939 }, 00:37:28.939 { 00:37:28.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:28.939 "dma_device_type": 2 00:37:28.939 } 00:37:28.939 ], 00:37:28.939 "driver_specific": {} 00:37:28.939 } 00:37:28.939 ] 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:28.939 12:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.197 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:29.197 "name": "Existed_Raid", 00:37:29.197 "uuid": "82ed5d19-8c41-4a6d-988f-3ebb76db8e78", 00:37:29.197 "strip_size_kb": 64, 00:37:29.197 "state": "online", 00:37:29.197 "raid_level": "raid5f", 00:37:29.197 "superblock": true, 00:37:29.197 "num_base_bdevs": 4, 00:37:29.197 "num_base_bdevs_discovered": 4, 00:37:29.197 "num_base_bdevs_operational": 4, 00:37:29.197 "base_bdevs_list": [ 00:37:29.197 { 00:37:29.197 "name": "BaseBdev1", 00:37:29.197 "uuid": "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1", 00:37:29.197 "is_configured": true, 00:37:29.197 "data_offset": 2048, 00:37:29.197 "data_size": 63488 00:37:29.197 }, 00:37:29.197 { 00:37:29.197 "name": "BaseBdev2", 00:37:29.197 "uuid": "57d0df3b-599a-4786-b28f-1d40c557c5d7", 00:37:29.197 "is_configured": true, 00:37:29.197 "data_offset": 2048, 00:37:29.197 "data_size": 63488 00:37:29.197 }, 00:37:29.197 { 00:37:29.197 "name": "BaseBdev3", 00:37:29.197 "uuid": "b0fbd5cc-9866-478e-928c-bc2574d75ef5", 00:37:29.197 "is_configured": true, 00:37:29.197 "data_offset": 2048, 00:37:29.197 
"data_size": 63488 00:37:29.197 }, 00:37:29.197 { 00:37:29.197 "name": "BaseBdev4", 00:37:29.197 "uuid": "00dd9c66-08bb-49ed-ab2f-9c4a5b4b1da9", 00:37:29.197 "is_configured": true, 00:37:29.197 "data_offset": 2048, 00:37:29.197 "data_size": 63488 00:37:29.197 } 00:37:29.197 ] 00:37:29.197 }' 00:37:29.197 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:29.197 12:01:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:29.763 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:37:29.763 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:29.763 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:29.763 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:29.763 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:29.763 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:37:29.763 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:29.763 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:30.021 [2024-06-10 12:01:01.975302] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:30.021 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:30.021 "name": "Existed_Raid", 00:37:30.021 "aliases": [ 00:37:30.021 "82ed5d19-8c41-4a6d-988f-3ebb76db8e78" 00:37:30.021 ], 00:37:30.022 "product_name": "Raid Volume", 00:37:30.022 "block_size": 512, 00:37:30.022 "num_blocks": 190464, 00:37:30.022 "uuid": "82ed5d19-8c41-4a6d-988f-3ebb76db8e78", 00:37:30.022 "assigned_rate_limits": { 00:37:30.022 "rw_ios_per_sec": 0, 00:37:30.022 "rw_mbytes_per_sec": 0, 00:37:30.022 "r_mbytes_per_sec": 0, 00:37:30.022 "w_mbytes_per_sec": 0 00:37:30.022 }, 00:37:30.022 "claimed": false, 00:37:30.022 "zoned": false, 00:37:30.022 "supported_io_types": { 00:37:30.022 "read": true, 00:37:30.022 "write": true, 00:37:30.022 "unmap": false, 00:37:30.022 "write_zeroes": true, 00:37:30.022 "flush": false, 00:37:30.022 "reset": true, 00:37:30.022 "compare": false, 00:37:30.022 "compare_and_write": false, 00:37:30.022 "abort": false, 00:37:30.022 "nvme_admin": false, 00:37:30.022 "nvme_io": false 00:37:30.022 }, 00:37:30.022 "driver_specific": { 00:37:30.022 "raid": { 00:37:30.022 "uuid": "82ed5d19-8c41-4a6d-988f-3ebb76db8e78", 00:37:30.022 "strip_size_kb": 64, 00:37:30.022 "state": "online", 00:37:30.022 "raid_level": "raid5f", 00:37:30.022 "superblock": true, 00:37:30.022 "num_base_bdevs": 4, 00:37:30.022 "num_base_bdevs_discovered": 4, 00:37:30.022 "num_base_bdevs_operational": 4, 00:37:30.022 "base_bdevs_list": [ 00:37:30.022 { 00:37:30.022 "name": "BaseBdev1", 00:37:30.022 "uuid": "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1", 00:37:30.022 "is_configured": true, 00:37:30.022 "data_offset": 2048, 00:37:30.022 "data_size": 63488 00:37:30.022 }, 00:37:30.022 { 00:37:30.022 "name": "BaseBdev2", 00:37:30.022 "uuid": "57d0df3b-599a-4786-b28f-1d40c557c5d7", 00:37:30.022 "is_configured": true, 00:37:30.022 "data_offset": 2048, 00:37:30.022 "data_size": 63488 00:37:30.022 
}, 00:37:30.022 { 00:37:30.022 "name": "BaseBdev3", 00:37:30.022 "uuid": "b0fbd5cc-9866-478e-928c-bc2574d75ef5", 00:37:30.022 "is_configured": true, 00:37:30.022 "data_offset": 2048, 00:37:30.022 "data_size": 63488 00:37:30.022 }, 00:37:30.022 { 00:37:30.022 "name": "BaseBdev4", 00:37:30.022 "uuid": "00dd9c66-08bb-49ed-ab2f-9c4a5b4b1da9", 00:37:30.022 "is_configured": true, 00:37:30.022 "data_offset": 2048, 00:37:30.022 "data_size": 63488 00:37:30.022 } 00:37:30.022 ] 00:37:30.022 } 00:37:30.022 } 00:37:30.022 }' 00:37:30.022 12:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:30.022 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:37:30.022 BaseBdev2 00:37:30.022 BaseBdev3 00:37:30.022 BaseBdev4' 00:37:30.022 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:30.022 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:37:30.022 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:30.279 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:30.279 "name": "BaseBdev1", 00:37:30.279 "aliases": [ 00:37:30.279 "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1" 00:37:30.279 ], 00:37:30.279 "product_name": "Malloc disk", 00:37:30.279 "block_size": 512, 00:37:30.279 "num_blocks": 65536, 00:37:30.279 "uuid": "b0dfbc4b-dd59-44f5-aa78-57965ae64ff1", 00:37:30.279 "assigned_rate_limits": { 00:37:30.279 "rw_ios_per_sec": 0, 00:37:30.279 "rw_mbytes_per_sec": 0, 00:37:30.279 "r_mbytes_per_sec": 0, 00:37:30.279 "w_mbytes_per_sec": 0 00:37:30.279 }, 00:37:30.279 "claimed": true, 00:37:30.279 "claim_type": "exclusive_write", 00:37:30.279 "zoned": false, 00:37:30.279 "supported_io_types": { 00:37:30.279 "read": true, 00:37:30.279 "write": true, 00:37:30.279 "unmap": true, 00:37:30.279 "write_zeroes": true, 00:37:30.279 "flush": true, 00:37:30.279 "reset": true, 00:37:30.279 "compare": false, 00:37:30.279 "compare_and_write": false, 00:37:30.279 "abort": true, 00:37:30.279 "nvme_admin": false, 00:37:30.279 "nvme_io": false 00:37:30.279 }, 00:37:30.279 "memory_domains": [ 00:37:30.279 { 00:37:30.279 "dma_device_id": "system", 00:37:30.279 "dma_device_type": 1 00:37:30.279 }, 00:37:30.279 { 00:37:30.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:30.279 "dma_device_type": 2 00:37:30.279 } 00:37:30.279 ], 00:37:30.279 "driver_specific": {} 00:37:30.279 }' 00:37:30.279 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:30.538 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:30.796 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:30.797 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:30.797 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:30.797 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:30.797 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:31.073 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:31.073 "name": "BaseBdev2", 00:37:31.073 "aliases": [ 00:37:31.073 "57d0df3b-599a-4786-b28f-1d40c557c5d7" 00:37:31.073 ], 00:37:31.073 "product_name": "Malloc disk", 00:37:31.073 "block_size": 512, 00:37:31.073 "num_blocks": 65536, 00:37:31.073 "uuid": "57d0df3b-599a-4786-b28f-1d40c557c5d7", 00:37:31.073 "assigned_rate_limits": { 00:37:31.073 "rw_ios_per_sec": 0, 00:37:31.073 "rw_mbytes_per_sec": 0, 00:37:31.073 "r_mbytes_per_sec": 0, 00:37:31.073 "w_mbytes_per_sec": 0 00:37:31.074 }, 00:37:31.074 "claimed": true, 00:37:31.074 "claim_type": "exclusive_write", 00:37:31.074 "zoned": false, 00:37:31.074 "supported_io_types": { 00:37:31.074 "read": true, 00:37:31.074 "write": true, 00:37:31.074 "unmap": true, 00:37:31.074 "write_zeroes": true, 00:37:31.074 "flush": true, 00:37:31.074 "reset": true, 00:37:31.074 "compare": false, 00:37:31.074 "compare_and_write": false, 00:37:31.074 "abort": true, 00:37:31.074 "nvme_admin": false, 00:37:31.074 "nvme_io": false 00:37:31.074 }, 00:37:31.074 "memory_domains": [ 00:37:31.074 { 00:37:31.074 "dma_device_id": "system", 00:37:31.074 "dma_device_type": 1 00:37:31.074 }, 00:37:31.074 { 00:37:31.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:31.074 "dma_device_type": 2 00:37:31.074 } 00:37:31.074 ], 00:37:31.074 "driver_specific": {} 00:37:31.074 }' 00:37:31.074 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:31.074 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:31.074 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:31.074 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.074 12:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.074 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:31.074 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.074 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.337 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:31.337 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.337 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.337 12:01:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:31.337 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:31.337 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:37:31.337 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:31.595 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:31.595 "name": "BaseBdev3", 00:37:31.595 "aliases": [ 00:37:31.595 "b0fbd5cc-9866-478e-928c-bc2574d75ef5" 00:37:31.595 ], 00:37:31.595 "product_name": "Malloc disk", 00:37:31.595 "block_size": 512, 00:37:31.595 "num_blocks": 65536, 00:37:31.595 "uuid": "b0fbd5cc-9866-478e-928c-bc2574d75ef5", 00:37:31.595 "assigned_rate_limits": { 00:37:31.595 "rw_ios_per_sec": 0, 00:37:31.595 "rw_mbytes_per_sec": 0, 00:37:31.595 "r_mbytes_per_sec": 0, 00:37:31.595 "w_mbytes_per_sec": 0 00:37:31.595 }, 00:37:31.595 "claimed": true, 00:37:31.595 "claim_type": "exclusive_write", 00:37:31.595 "zoned": false, 00:37:31.595 "supported_io_types": { 00:37:31.595 "read": true, 00:37:31.595 "write": true, 00:37:31.595 "unmap": true, 00:37:31.595 "write_zeroes": true, 00:37:31.595 "flush": true, 00:37:31.595 "reset": true, 00:37:31.595 "compare": false, 00:37:31.596 "compare_and_write": false, 00:37:31.596 "abort": true, 00:37:31.596 "nvme_admin": false, 00:37:31.596 "nvme_io": false 00:37:31.596 }, 00:37:31.596 "memory_domains": [ 00:37:31.596 { 00:37:31.596 "dma_device_id": "system", 00:37:31.596 "dma_device_type": 1 00:37:31.596 }, 00:37:31.596 { 00:37:31.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:31.596 "dma_device_type": 2 00:37:31.596 } 00:37:31.596 ], 00:37:31.596 "driver_specific": {} 00:37:31.596 }' 00:37:31.596 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:31.596 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:31.596 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:31.596 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:31.855 12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:37:31.855 
12:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:32.420 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:32.421 "name": "BaseBdev4", 00:37:32.421 "aliases": [ 00:37:32.421 "00dd9c66-08bb-49ed-ab2f-9c4a5b4b1da9" 00:37:32.421 ], 00:37:32.421 "product_name": "Malloc disk", 00:37:32.421 "block_size": 512, 00:37:32.421 "num_blocks": 65536, 00:37:32.421 "uuid": "00dd9c66-08bb-49ed-ab2f-9c4a5b4b1da9", 00:37:32.421 "assigned_rate_limits": { 00:37:32.421 "rw_ios_per_sec": 0, 00:37:32.421 "rw_mbytes_per_sec": 0, 00:37:32.421 "r_mbytes_per_sec": 0, 00:37:32.421 "w_mbytes_per_sec": 0 00:37:32.421 }, 00:37:32.421 "claimed": true, 00:37:32.421 "claim_type": "exclusive_write", 00:37:32.421 "zoned": false, 00:37:32.421 "supported_io_types": { 00:37:32.421 "read": true, 00:37:32.421 "write": true, 00:37:32.421 "unmap": true, 00:37:32.421 "write_zeroes": true, 00:37:32.421 "flush": true, 00:37:32.421 "reset": true, 00:37:32.421 "compare": false, 00:37:32.421 "compare_and_write": false, 00:37:32.421 "abort": true, 00:37:32.421 "nvme_admin": false, 00:37:32.421 "nvme_io": false 00:37:32.421 }, 00:37:32.421 "memory_domains": [ 00:37:32.421 { 00:37:32.421 "dma_device_id": "system", 00:37:32.421 "dma_device_type": 1 00:37:32.421 }, 00:37:32.421 { 00:37:32.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:32.421 "dma_device_type": 2 00:37:32.421 } 00:37:32.421 ], 00:37:32.421 "driver_specific": {} 00:37:32.421 }' 00:37:32.421 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:32.421 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:32.421 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:32.421 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:32.421 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:32.421 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:32.421 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:32.421 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:32.679 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:32.679 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:32.679 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:32.679 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:32.679 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:32.938 [2024-06-10 12:01:04.843853] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:37:32.938 
12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:32.938 12:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.197 12:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:33.197 "name": "Existed_Raid", 00:37:33.197 "uuid": "82ed5d19-8c41-4a6d-988f-3ebb76db8e78", 00:37:33.197 "strip_size_kb": 64, 00:37:33.197 "state": "online", 00:37:33.197 "raid_level": "raid5f", 00:37:33.197 "superblock": true, 00:37:33.197 "num_base_bdevs": 4, 00:37:33.197 "num_base_bdevs_discovered": 3, 00:37:33.197 "num_base_bdevs_operational": 3, 00:37:33.197 "base_bdevs_list": [ 00:37:33.197 { 00:37:33.197 "name": null, 00:37:33.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:33.197 "is_configured": false, 00:37:33.197 "data_offset": 2048, 00:37:33.197 "data_size": 63488 00:37:33.197 }, 00:37:33.197 { 00:37:33.197 "name": "BaseBdev2", 00:37:33.197 "uuid": "57d0df3b-599a-4786-b28f-1d40c557c5d7", 00:37:33.197 "is_configured": true, 00:37:33.197 "data_offset": 2048, 00:37:33.197 "data_size": 63488 00:37:33.197 }, 00:37:33.197 { 00:37:33.197 "name": "BaseBdev3", 00:37:33.197 "uuid": "b0fbd5cc-9866-478e-928c-bc2574d75ef5", 00:37:33.197 "is_configured": true, 00:37:33.197 "data_offset": 2048, 00:37:33.197 "data_size": 63488 00:37:33.197 }, 00:37:33.197 { 00:37:33.197 "name": "BaseBdev4", 00:37:33.197 "uuid": "00dd9c66-08bb-49ed-ab2f-9c4a5b4b1da9", 00:37:33.197 "is_configured": true, 00:37:33.197 "data_offset": 2048, 00:37:33.197 "data_size": 63488 00:37:33.197 } 00:37:33.197 ] 00:37:33.197 }' 00:37:33.197 12:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:33.197 12:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:33.765 12:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:37:33.765 12:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:33.765 12:01:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.765 12:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:37:34.025 12:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:37:34.025 12:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:34.025 12:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:34.285 [2024-06-10 12:01:06.153715] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:34.285 [2024-06-10 12:01:06.154026] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:34.285 [2024-06-10 12:01:06.256761] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:34.285 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:37:34.285 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:34.285 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:34.285 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:37:34.544 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:37:34.544 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:34.544 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:37:34.802 [2024-06-10 12:01:06.664930] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:34.802 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:37:34.802 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:34.802 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:37:34.802 12:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.060 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:37:35.060 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:35.060 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:37:35.318 [2024-06-10 12:01:07.331012] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:37:35.318 [2024-06-10 12:01:07.331283] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:37:35.576 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:37:35.576 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < 
num_base_bdevs )) 00:37:35.576 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.576 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:37:35.833 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:37:35.833 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:37:35.833 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:37:35.833 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:37:35.833 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:35.833 12:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:37:36.091 BaseBdev2 00:37:36.091 12:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:37:36.091 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:37:36.091 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:36.091 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:36.091 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:36.091 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:36.091 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:36.348 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:36.604 [ 00:37:36.604 { 00:37:36.604 "name": "BaseBdev2", 00:37:36.604 "aliases": [ 00:37:36.604 "a965e9ea-42b9-4ab6-b217-08191c747a80" 00:37:36.604 ], 00:37:36.604 "product_name": "Malloc disk", 00:37:36.604 "block_size": 512, 00:37:36.604 "num_blocks": 65536, 00:37:36.604 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:36.604 "assigned_rate_limits": { 00:37:36.604 "rw_ios_per_sec": 0, 00:37:36.604 "rw_mbytes_per_sec": 0, 00:37:36.604 "r_mbytes_per_sec": 0, 00:37:36.604 "w_mbytes_per_sec": 0 00:37:36.604 }, 00:37:36.604 "claimed": false, 00:37:36.604 "zoned": false, 00:37:36.604 "supported_io_types": { 00:37:36.604 "read": true, 00:37:36.604 "write": true, 00:37:36.604 "unmap": true, 00:37:36.604 "write_zeroes": true, 00:37:36.604 "flush": true, 00:37:36.604 "reset": true, 00:37:36.604 "compare": false, 00:37:36.604 "compare_and_write": false, 00:37:36.604 "abort": true, 00:37:36.604 "nvme_admin": false, 00:37:36.604 "nvme_io": false 00:37:36.604 }, 00:37:36.604 "memory_domains": [ 00:37:36.604 { 00:37:36.604 "dma_device_id": "system", 00:37:36.604 "dma_device_type": 1 00:37:36.604 }, 00:37:36.604 { 00:37:36.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:36.604 "dma_device_type": 2 00:37:36.604 } 00:37:36.604 ], 00:37:36.604 "driver_specific": {} 00:37:36.604 } 00:37:36.604 ] 00:37:36.862 12:01:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:36.862 12:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:37:36.862 12:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:36.862 12:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:37:36.862 BaseBdev3 00:37:37.121 12:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:37:37.121 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:37:37.121 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:37.121 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:37.121 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:37.121 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:37.121 12:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:37.380 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:37.380 [ 00:37:37.380 { 00:37:37.380 "name": "BaseBdev3", 00:37:37.380 "aliases": [ 00:37:37.380 "77422d09-e7d3-4b55-9f25-90887a0bba11" 00:37:37.380 ], 00:37:37.380 "product_name": "Malloc disk", 00:37:37.380 "block_size": 512, 00:37:37.380 "num_blocks": 65536, 00:37:37.380 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:37.380 "assigned_rate_limits": { 00:37:37.380 "rw_ios_per_sec": 0, 00:37:37.380 "rw_mbytes_per_sec": 0, 00:37:37.380 "r_mbytes_per_sec": 0, 00:37:37.380 "w_mbytes_per_sec": 0 00:37:37.380 }, 00:37:37.380 "claimed": false, 00:37:37.380 "zoned": false, 00:37:37.380 "supported_io_types": { 00:37:37.380 "read": true, 00:37:37.380 "write": true, 00:37:37.380 "unmap": true, 00:37:37.380 "write_zeroes": true, 00:37:37.380 "flush": true, 00:37:37.380 "reset": true, 00:37:37.380 "compare": false, 00:37:37.380 "compare_and_write": false, 00:37:37.380 "abort": true, 00:37:37.380 "nvme_admin": false, 00:37:37.380 "nvme_io": false 00:37:37.380 }, 00:37:37.380 "memory_domains": [ 00:37:37.380 { 00:37:37.380 "dma_device_id": "system", 00:37:37.380 "dma_device_type": 1 00:37:37.380 }, 00:37:37.380 { 00:37:37.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:37.380 "dma_device_type": 2 00:37:37.380 } 00:37:37.380 ], 00:37:37.380 "driver_specific": {} 00:37:37.380 } 00:37:37.380 ] 00:37:37.380 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:37.380 12:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:37:37.380 12:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:37.380 12:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:37:37.948 BaseBdev4 00:37:37.948 12:01:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:37:37.948 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:37:37.948 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:37.948 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:37.948 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:37.948 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:37.948 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:37.948 12:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:38.205 [ 00:37:38.205 { 00:37:38.205 "name": "BaseBdev4", 00:37:38.205 "aliases": [ 00:37:38.205 "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6" 00:37:38.205 ], 00:37:38.205 "product_name": "Malloc disk", 00:37:38.205 "block_size": 512, 00:37:38.205 "num_blocks": 65536, 00:37:38.205 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:38.205 "assigned_rate_limits": { 00:37:38.205 "rw_ios_per_sec": 0, 00:37:38.205 "rw_mbytes_per_sec": 0, 00:37:38.205 "r_mbytes_per_sec": 0, 00:37:38.205 "w_mbytes_per_sec": 0 00:37:38.205 }, 00:37:38.205 "claimed": false, 00:37:38.205 "zoned": false, 00:37:38.205 "supported_io_types": { 00:37:38.205 "read": true, 00:37:38.205 "write": true, 00:37:38.205 "unmap": true, 00:37:38.205 "write_zeroes": true, 00:37:38.205 "flush": true, 00:37:38.205 "reset": true, 00:37:38.205 "compare": false, 00:37:38.205 "compare_and_write": false, 00:37:38.205 "abort": true, 00:37:38.205 "nvme_admin": false, 00:37:38.205 "nvme_io": false 00:37:38.205 }, 00:37:38.205 "memory_domains": [ 00:37:38.205 { 00:37:38.205 "dma_device_id": "system", 00:37:38.205 "dma_device_type": 1 00:37:38.205 }, 00:37:38.205 { 00:37:38.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:38.205 "dma_device_type": 2 00:37:38.205 } 00:37:38.205 ], 00:37:38.205 "driver_specific": {} 00:37:38.205 } 00:37:38.206 ] 00:37:38.206 12:01:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:38.206 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:37:38.206 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:38.206 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:38.465 [2024-06-10 12:01:10.427408] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:38.465 [2024-06-10 12:01:10.427719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:38.465 [2024-06-10 12:01:10.427876] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:38.465 [2024-06-10 12:01:10.430814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:38.465 [2024-06-10 12:01:10.431051] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.465 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:38.729 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:38.729 "name": "Existed_Raid", 00:37:38.729 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:38.729 "strip_size_kb": 64, 00:37:38.729 "state": "configuring", 00:37:38.729 "raid_level": "raid5f", 00:37:38.729 "superblock": true, 00:37:38.729 "num_base_bdevs": 4, 00:37:38.729 "num_base_bdevs_discovered": 3, 00:37:38.729 "num_base_bdevs_operational": 4, 00:37:38.729 "base_bdevs_list": [ 00:37:38.729 { 00:37:38.729 "name": "BaseBdev1", 00:37:38.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:38.729 "is_configured": false, 00:37:38.729 "data_offset": 0, 00:37:38.729 "data_size": 0 00:37:38.729 }, 00:37:38.729 { 00:37:38.729 "name": "BaseBdev2", 00:37:38.729 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:38.729 "is_configured": true, 00:37:38.729 "data_offset": 2048, 00:37:38.729 "data_size": 63488 00:37:38.729 }, 00:37:38.729 { 00:37:38.729 "name": "BaseBdev3", 00:37:38.729 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:38.729 "is_configured": true, 00:37:38.729 "data_offset": 2048, 00:37:38.729 "data_size": 63488 00:37:38.729 }, 00:37:38.729 { 00:37:38.729 "name": "BaseBdev4", 00:37:38.729 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:38.729 "is_configured": true, 00:37:38.729 "data_offset": 2048, 00:37:38.729 "data_size": 63488 00:37:38.729 } 00:37:38.729 ] 00:37:38.729 }' 00:37:38.729 12:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:38.729 12:01:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:39.309 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:37:39.574 [2024-06-10 12:01:11.503537] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.574 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:39.832 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:39.832 "name": "Existed_Raid", 00:37:39.832 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:39.832 "strip_size_kb": 64, 00:37:39.832 "state": "configuring", 00:37:39.832 "raid_level": "raid5f", 00:37:39.832 "superblock": true, 00:37:39.832 "num_base_bdevs": 4, 00:37:39.832 "num_base_bdevs_discovered": 2, 00:37:39.832 "num_base_bdevs_operational": 4, 00:37:39.832 "base_bdevs_list": [ 00:37:39.832 { 00:37:39.832 "name": "BaseBdev1", 00:37:39.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:39.832 "is_configured": false, 00:37:39.832 "data_offset": 0, 00:37:39.832 "data_size": 0 00:37:39.832 }, 00:37:39.832 { 00:37:39.832 "name": null, 00:37:39.832 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:39.832 "is_configured": false, 00:37:39.832 "data_offset": 2048, 00:37:39.832 "data_size": 63488 00:37:39.832 }, 00:37:39.832 { 00:37:39.832 "name": "BaseBdev3", 00:37:39.832 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:39.832 "is_configured": true, 00:37:39.832 "data_offset": 2048, 00:37:39.832 "data_size": 63488 00:37:39.832 }, 00:37:39.832 { 00:37:39.832 "name": "BaseBdev4", 00:37:39.832 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:39.832 "is_configured": true, 00:37:39.832 "data_offset": 2048, 00:37:39.832 "data_size": 63488 00:37:39.832 } 00:37:39.832 ] 00:37:39.832 }' 00:37:39.832 12:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:39.832 12:01:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:40.764 12:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:40.764 12:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:37:40.764 12:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:37:40.764 12:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:41.022 [2024-06-10 12:01:12.958233] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:41.022 BaseBdev1 00:37:41.022 12:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:37:41.022 12:01:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:37:41.022 12:01:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:41.022 12:01:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:41.022 12:01:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:41.022 12:01:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:41.022 12:01:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:41.586 12:01:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:41.843 [ 00:37:41.843 { 00:37:41.843 "name": "BaseBdev1", 00:37:41.843 "aliases": [ 00:37:41.843 "6c324d63-dd5e-4fef-b203-a012373201d9" 00:37:41.843 ], 00:37:41.843 "product_name": "Malloc disk", 00:37:41.843 "block_size": 512, 00:37:41.843 "num_blocks": 65536, 00:37:41.843 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:41.843 "assigned_rate_limits": { 00:37:41.843 "rw_ios_per_sec": 0, 00:37:41.843 "rw_mbytes_per_sec": 0, 00:37:41.843 "r_mbytes_per_sec": 0, 00:37:41.843 "w_mbytes_per_sec": 0 00:37:41.843 }, 00:37:41.843 "claimed": true, 00:37:41.843 "claim_type": "exclusive_write", 00:37:41.843 "zoned": false, 00:37:41.843 "supported_io_types": { 00:37:41.843 "read": true, 00:37:41.843 "write": true, 00:37:41.843 "unmap": true, 00:37:41.843 "write_zeroes": true, 00:37:41.843 "flush": true, 00:37:41.843 "reset": true, 00:37:41.843 "compare": false, 00:37:41.843 "compare_and_write": false, 00:37:41.843 "abort": true, 00:37:41.843 "nvme_admin": false, 00:37:41.843 "nvme_io": false 00:37:41.843 }, 00:37:41.843 "memory_domains": [ 00:37:41.843 { 00:37:41.843 "dma_device_id": "system", 00:37:41.843 "dma_device_type": 1 00:37:41.844 }, 00:37:41.844 { 00:37:41.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:41.844 "dma_device_type": 2 00:37:41.844 } 00:37:41.844 ], 00:37:41.844 "driver_specific": {} 00:37:41.844 } 00:37:41.844 ] 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.844 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:42.101 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:42.101 "name": "Existed_Raid", 00:37:42.101 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:42.101 "strip_size_kb": 64, 00:37:42.101 "state": "configuring", 00:37:42.101 "raid_level": "raid5f", 00:37:42.101 "superblock": true, 00:37:42.101 "num_base_bdevs": 4, 00:37:42.101 "num_base_bdevs_discovered": 3, 00:37:42.101 "num_base_bdevs_operational": 4, 00:37:42.101 "base_bdevs_list": [ 00:37:42.101 { 00:37:42.101 "name": "BaseBdev1", 00:37:42.101 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:42.101 "is_configured": true, 00:37:42.101 "data_offset": 2048, 00:37:42.101 "data_size": 63488 00:37:42.101 }, 00:37:42.101 { 00:37:42.101 "name": null, 00:37:42.101 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:42.101 "is_configured": false, 00:37:42.101 "data_offset": 2048, 00:37:42.101 "data_size": 63488 00:37:42.101 }, 00:37:42.101 { 00:37:42.102 "name": "BaseBdev3", 00:37:42.102 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:42.102 "is_configured": true, 00:37:42.102 "data_offset": 2048, 00:37:42.102 "data_size": 63488 00:37:42.102 }, 00:37:42.102 { 00:37:42.102 "name": "BaseBdev4", 00:37:42.102 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:42.102 "is_configured": true, 00:37:42.102 "data_offset": 2048, 00:37:42.102 "data_size": 63488 00:37:42.102 } 00:37:42.102 ] 00:37:42.102 }' 00:37:42.102 12:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:42.102 12:01:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.703 12:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:42.703 12:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:42.959 12:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:37:42.959 12:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:37:43.219 [2024-06-10 12:01:15.110806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 
-- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:43.219 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:43.477 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:43.477 "name": "Existed_Raid", 00:37:43.477 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:43.477 "strip_size_kb": 64, 00:37:43.477 "state": "configuring", 00:37:43.477 "raid_level": "raid5f", 00:37:43.477 "superblock": true, 00:37:43.477 "num_base_bdevs": 4, 00:37:43.477 "num_base_bdevs_discovered": 2, 00:37:43.477 "num_base_bdevs_operational": 4, 00:37:43.477 "base_bdevs_list": [ 00:37:43.477 { 00:37:43.477 "name": "BaseBdev1", 00:37:43.477 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:43.477 "is_configured": true, 00:37:43.477 "data_offset": 2048, 00:37:43.477 "data_size": 63488 00:37:43.477 }, 00:37:43.477 { 00:37:43.477 "name": null, 00:37:43.477 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:43.477 "is_configured": false, 00:37:43.477 "data_offset": 2048, 00:37:43.477 "data_size": 63488 00:37:43.477 }, 00:37:43.477 { 00:37:43.477 "name": null, 00:37:43.477 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:43.477 "is_configured": false, 00:37:43.477 "data_offset": 2048, 00:37:43.477 "data_size": 63488 00:37:43.477 }, 00:37:43.477 { 00:37:43.477 "name": "BaseBdev4", 00:37:43.477 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:43.477 "is_configured": true, 00:37:43.477 "data_offset": 2048, 00:37:43.477 "data_size": 63488 00:37:43.477 } 00:37:43.477 ] 00:37:43.477 }' 00:37:43.477 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:43.477 12:01:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.042 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:44.042 12:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.301 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:37:44.301 
12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:44.560 [2024-06-10 12:01:16.483233] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.560 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:44.817 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:44.817 "name": "Existed_Raid", 00:37:44.817 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:44.817 "strip_size_kb": 64, 00:37:44.817 "state": "configuring", 00:37:44.817 "raid_level": "raid5f", 00:37:44.817 "superblock": true, 00:37:44.817 "num_base_bdevs": 4, 00:37:44.817 "num_base_bdevs_discovered": 3, 00:37:44.817 "num_base_bdevs_operational": 4, 00:37:44.817 "base_bdevs_list": [ 00:37:44.817 { 00:37:44.817 "name": "BaseBdev1", 00:37:44.817 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:44.817 "is_configured": true, 00:37:44.817 "data_offset": 2048, 00:37:44.817 "data_size": 63488 00:37:44.817 }, 00:37:44.817 { 00:37:44.817 "name": null, 00:37:44.817 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:44.817 "is_configured": false, 00:37:44.817 "data_offset": 2048, 00:37:44.817 "data_size": 63488 00:37:44.817 }, 00:37:44.817 { 00:37:44.817 "name": "BaseBdev3", 00:37:44.817 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:44.817 "is_configured": true, 00:37:44.817 "data_offset": 2048, 00:37:44.817 "data_size": 63488 00:37:44.817 }, 00:37:44.817 { 00:37:44.817 "name": "BaseBdev4", 00:37:44.817 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:44.817 "is_configured": true, 00:37:44.817 "data_offset": 2048, 00:37:44.817 "data_size": 63488 00:37:44.817 } 00:37:44.817 ] 00:37:44.817 }' 00:37:44.817 12:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:44.817 12:01:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.752 12:01:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:45.752 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:45.752 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:37:45.752 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:46.011 [2024-06-10 12:01:17.859506] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:46.011 12:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:46.270 12:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:46.270 "name": "Existed_Raid", 00:37:46.270 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:46.270 "strip_size_kb": 64, 00:37:46.270 "state": "configuring", 00:37:46.270 "raid_level": "raid5f", 00:37:46.270 "superblock": true, 00:37:46.270 "num_base_bdevs": 4, 00:37:46.270 "num_base_bdevs_discovered": 2, 00:37:46.270 "num_base_bdevs_operational": 4, 00:37:46.270 "base_bdevs_list": [ 00:37:46.270 { 00:37:46.270 "name": null, 00:37:46.270 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:46.270 "is_configured": false, 00:37:46.270 "data_offset": 2048, 00:37:46.270 "data_size": 63488 00:37:46.270 }, 00:37:46.270 { 00:37:46.270 "name": null, 00:37:46.270 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:46.270 "is_configured": false, 00:37:46.270 "data_offset": 2048, 00:37:46.270 "data_size": 63488 00:37:46.270 }, 00:37:46.270 { 00:37:46.270 "name": "BaseBdev3", 00:37:46.270 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:46.270 "is_configured": true, 00:37:46.270 "data_offset": 2048, 00:37:46.270 "data_size": 63488 00:37:46.270 }, 00:37:46.270 { 00:37:46.270 "name": "BaseBdev4", 00:37:46.270 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:46.270 
"is_configured": true, 00:37:46.270 "data_offset": 2048, 00:37:46.270 "data_size": 63488 00:37:46.270 } 00:37:46.270 ] 00:37:46.270 }' 00:37:46.270 12:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:46.270 12:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.837 12:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:46.837 12:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:47.094 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:37:47.094 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:47.353 [2024-06-10 12:01:19.263636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:47.353 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:47.612 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:47.612 "name": "Existed_Raid", 00:37:47.612 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:47.612 "strip_size_kb": 64, 00:37:47.612 "state": "configuring", 00:37:47.612 "raid_level": "raid5f", 00:37:47.612 "superblock": true, 00:37:47.612 "num_base_bdevs": 4, 00:37:47.612 "num_base_bdevs_discovered": 3, 00:37:47.612 "num_base_bdevs_operational": 4, 00:37:47.612 "base_bdevs_list": [ 00:37:47.612 { 00:37:47.612 "name": null, 00:37:47.612 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:47.612 "is_configured": false, 00:37:47.612 "data_offset": 2048, 00:37:47.612 "data_size": 63488 00:37:47.612 }, 00:37:47.612 { 00:37:47.612 "name": "BaseBdev2", 00:37:47.612 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:47.612 "is_configured": true, 00:37:47.612 
"data_offset": 2048, 00:37:47.612 "data_size": 63488 00:37:47.612 }, 00:37:47.612 { 00:37:47.612 "name": "BaseBdev3", 00:37:47.612 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:47.612 "is_configured": true, 00:37:47.612 "data_offset": 2048, 00:37:47.612 "data_size": 63488 00:37:47.612 }, 00:37:47.612 { 00:37:47.612 "name": "BaseBdev4", 00:37:47.612 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:47.612 "is_configured": true, 00:37:47.612 "data_offset": 2048, 00:37:47.612 "data_size": 63488 00:37:47.612 } 00:37:47.612 ] 00:37:47.612 }' 00:37:47.612 12:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:47.612 12:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:48.179 12:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:48.179 12:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.747 12:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:37:48.747 12:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.747 12:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:48.747 12:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6c324d63-dd5e-4fef-b203-a012373201d9 00:37:49.004 [2024-06-10 12:01:21.047733] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:49.004 [2024-06-10 12:01:21.048233] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:37:49.004 [2024-06-10 12:01:21.048364] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:49.004 [2024-06-10 12:01:21.048517] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:49.004 NewBaseBdev 00:37:49.004 [2024-06-10 12:01:21.056144] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:37:49.004 [2024-06-10 12:01:21.056276] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:37:49.004 [2024-06-10 12:01:21.056526] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:49.262 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:37:49.262 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:37:49.262 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:37:49.262 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:37:49.262 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:37:49.262 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:37:49.262 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:49.262 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:49.521 [ 00:37:49.521 { 00:37:49.521 "name": "NewBaseBdev", 00:37:49.521 "aliases": [ 00:37:49.521 "6c324d63-dd5e-4fef-b203-a012373201d9" 00:37:49.521 ], 00:37:49.521 "product_name": "Malloc disk", 00:37:49.521 "block_size": 512, 00:37:49.521 "num_blocks": 65536, 00:37:49.521 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:49.521 "assigned_rate_limits": { 00:37:49.521 "rw_ios_per_sec": 0, 00:37:49.521 "rw_mbytes_per_sec": 0, 00:37:49.521 "r_mbytes_per_sec": 0, 00:37:49.521 "w_mbytes_per_sec": 0 00:37:49.521 }, 00:37:49.521 "claimed": true, 00:37:49.521 "claim_type": "exclusive_write", 00:37:49.521 "zoned": false, 00:37:49.521 "supported_io_types": { 00:37:49.521 "read": true, 00:37:49.521 "write": true, 00:37:49.521 "unmap": true, 00:37:49.521 "write_zeroes": true, 00:37:49.521 "flush": true, 00:37:49.521 "reset": true, 00:37:49.521 "compare": false, 00:37:49.521 "compare_and_write": false, 00:37:49.521 "abort": true, 00:37:49.521 "nvme_admin": false, 00:37:49.521 "nvme_io": false 00:37:49.521 }, 00:37:49.521 "memory_domains": [ 00:37:49.521 { 00:37:49.521 "dma_device_id": "system", 00:37:49.521 "dma_device_type": 1 00:37:49.521 }, 00:37:49.521 { 00:37:49.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:49.521 "dma_device_type": 2 00:37:49.521 } 00:37:49.521 ], 00:37:49.521 "driver_specific": {} 00:37:49.521 } 00:37:49.521 ] 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:49.521 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:49.780 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:49.780 "name": "Existed_Raid", 00:37:49.780 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:49.780 "strip_size_kb": 64, 00:37:49.780 "state": "online", 00:37:49.780 "raid_level": 
"raid5f", 00:37:49.780 "superblock": true, 00:37:49.780 "num_base_bdevs": 4, 00:37:49.780 "num_base_bdevs_discovered": 4, 00:37:49.780 "num_base_bdevs_operational": 4, 00:37:49.780 "base_bdevs_list": [ 00:37:49.780 { 00:37:49.780 "name": "NewBaseBdev", 00:37:49.780 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:49.780 "is_configured": true, 00:37:49.780 "data_offset": 2048, 00:37:49.780 "data_size": 63488 00:37:49.780 }, 00:37:49.780 { 00:37:49.780 "name": "BaseBdev2", 00:37:49.780 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:49.780 "is_configured": true, 00:37:49.780 "data_offset": 2048, 00:37:49.780 "data_size": 63488 00:37:49.780 }, 00:37:49.780 { 00:37:49.780 "name": "BaseBdev3", 00:37:49.780 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:49.780 "is_configured": true, 00:37:49.780 "data_offset": 2048, 00:37:49.780 "data_size": 63488 00:37:49.780 }, 00:37:49.780 { 00:37:49.780 "name": "BaseBdev4", 00:37:49.780 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:49.780 "is_configured": true, 00:37:49.780 "data_offset": 2048, 00:37:49.780 "data_size": 63488 00:37:49.780 } 00:37:49.780 ] 00:37:49.780 }' 00:37:49.780 12:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:49.780 12:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:50.345 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:37:50.345 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:50.345 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:50.345 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:50.345 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:50.345 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:37:50.345 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:50.345 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:50.604 [2024-06-10 12:01:22.575003] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:50.604 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:50.604 "name": "Existed_Raid", 00:37:50.604 "aliases": [ 00:37:50.604 "00e874da-acf2-4533-979f-0aa567fd0245" 00:37:50.604 ], 00:37:50.604 "product_name": "Raid Volume", 00:37:50.604 "block_size": 512, 00:37:50.604 "num_blocks": 190464, 00:37:50.604 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:50.604 "assigned_rate_limits": { 00:37:50.604 "rw_ios_per_sec": 0, 00:37:50.604 "rw_mbytes_per_sec": 0, 00:37:50.604 "r_mbytes_per_sec": 0, 00:37:50.604 "w_mbytes_per_sec": 0 00:37:50.604 }, 00:37:50.604 "claimed": false, 00:37:50.604 "zoned": false, 00:37:50.604 "supported_io_types": { 00:37:50.604 "read": true, 00:37:50.604 "write": true, 00:37:50.604 "unmap": false, 00:37:50.604 "write_zeroes": true, 00:37:50.604 "flush": false, 00:37:50.604 "reset": true, 00:37:50.604 "compare": false, 00:37:50.604 "compare_and_write": false, 00:37:50.604 "abort": false, 00:37:50.604 "nvme_admin": false, 00:37:50.604 "nvme_io": false 00:37:50.604 }, 00:37:50.604 
"driver_specific": { 00:37:50.604 "raid": { 00:37:50.604 "uuid": "00e874da-acf2-4533-979f-0aa567fd0245", 00:37:50.604 "strip_size_kb": 64, 00:37:50.604 "state": "online", 00:37:50.604 "raid_level": "raid5f", 00:37:50.604 "superblock": true, 00:37:50.604 "num_base_bdevs": 4, 00:37:50.604 "num_base_bdevs_discovered": 4, 00:37:50.604 "num_base_bdevs_operational": 4, 00:37:50.604 "base_bdevs_list": [ 00:37:50.604 { 00:37:50.604 "name": "NewBaseBdev", 00:37:50.604 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:50.604 "is_configured": true, 00:37:50.604 "data_offset": 2048, 00:37:50.604 "data_size": 63488 00:37:50.604 }, 00:37:50.604 { 00:37:50.604 "name": "BaseBdev2", 00:37:50.604 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:50.604 "is_configured": true, 00:37:50.604 "data_offset": 2048, 00:37:50.604 "data_size": 63488 00:37:50.604 }, 00:37:50.604 { 00:37:50.604 "name": "BaseBdev3", 00:37:50.604 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:50.604 "is_configured": true, 00:37:50.604 "data_offset": 2048, 00:37:50.604 "data_size": 63488 00:37:50.604 }, 00:37:50.604 { 00:37:50.604 "name": "BaseBdev4", 00:37:50.604 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:50.604 "is_configured": true, 00:37:50.604 "data_offset": 2048, 00:37:50.604 "data_size": 63488 00:37:50.604 } 00:37:50.604 ] 00:37:50.604 } 00:37:50.604 } 00:37:50.604 }' 00:37:50.604 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:50.604 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:37:50.604 BaseBdev2 00:37:50.604 BaseBdev3 00:37:50.604 BaseBdev4' 00:37:50.604 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:50.604 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:37:50.604 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:51.172 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:51.172 "name": "NewBaseBdev", 00:37:51.172 "aliases": [ 00:37:51.172 "6c324d63-dd5e-4fef-b203-a012373201d9" 00:37:51.172 ], 00:37:51.172 "product_name": "Malloc disk", 00:37:51.172 "block_size": 512, 00:37:51.172 "num_blocks": 65536, 00:37:51.172 "uuid": "6c324d63-dd5e-4fef-b203-a012373201d9", 00:37:51.172 "assigned_rate_limits": { 00:37:51.172 "rw_ios_per_sec": 0, 00:37:51.172 "rw_mbytes_per_sec": 0, 00:37:51.172 "r_mbytes_per_sec": 0, 00:37:51.172 "w_mbytes_per_sec": 0 00:37:51.172 }, 00:37:51.172 "claimed": true, 00:37:51.172 "claim_type": "exclusive_write", 00:37:51.172 "zoned": false, 00:37:51.172 "supported_io_types": { 00:37:51.172 "read": true, 00:37:51.172 "write": true, 00:37:51.172 "unmap": true, 00:37:51.172 "write_zeroes": true, 00:37:51.172 "flush": true, 00:37:51.172 "reset": true, 00:37:51.172 "compare": false, 00:37:51.172 "compare_and_write": false, 00:37:51.172 "abort": true, 00:37:51.172 "nvme_admin": false, 00:37:51.172 "nvme_io": false 00:37:51.172 }, 00:37:51.172 "memory_domains": [ 00:37:51.172 { 00:37:51.172 "dma_device_id": "system", 00:37:51.172 "dma_device_type": 1 00:37:51.172 }, 00:37:51.172 { 00:37:51.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:51.172 "dma_device_type": 2 00:37:51.172 } 00:37:51.172 ], 00:37:51.172 
"driver_specific": {} 00:37:51.172 }' 00:37:51.172 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:51.172 12:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:51.172 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:51.172 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:51.172 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:51.172 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:51.172 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:51.172 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:51.172 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:51.172 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:51.432 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:51.432 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:51.432 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:51.432 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:51.432 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:51.689 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:51.689 "name": "BaseBdev2", 00:37:51.689 "aliases": [ 00:37:51.689 "a965e9ea-42b9-4ab6-b217-08191c747a80" 00:37:51.689 ], 00:37:51.689 "product_name": "Malloc disk", 00:37:51.689 "block_size": 512, 00:37:51.689 "num_blocks": 65536, 00:37:51.689 "uuid": "a965e9ea-42b9-4ab6-b217-08191c747a80", 00:37:51.689 "assigned_rate_limits": { 00:37:51.689 "rw_ios_per_sec": 0, 00:37:51.689 "rw_mbytes_per_sec": 0, 00:37:51.689 "r_mbytes_per_sec": 0, 00:37:51.689 "w_mbytes_per_sec": 0 00:37:51.689 }, 00:37:51.689 "claimed": true, 00:37:51.689 "claim_type": "exclusive_write", 00:37:51.689 "zoned": false, 00:37:51.689 "supported_io_types": { 00:37:51.689 "read": true, 00:37:51.689 "write": true, 00:37:51.689 "unmap": true, 00:37:51.689 "write_zeroes": true, 00:37:51.689 "flush": true, 00:37:51.689 "reset": true, 00:37:51.689 "compare": false, 00:37:51.689 "compare_and_write": false, 00:37:51.689 "abort": true, 00:37:51.689 "nvme_admin": false, 00:37:51.689 "nvme_io": false 00:37:51.689 }, 00:37:51.689 "memory_domains": [ 00:37:51.689 { 00:37:51.689 "dma_device_id": "system", 00:37:51.689 "dma_device_type": 1 00:37:51.689 }, 00:37:51.689 { 00:37:51.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:51.689 "dma_device_type": 2 00:37:51.689 } 00:37:51.689 ], 00:37:51.689 "driver_specific": {} 00:37:51.689 }' 00:37:51.689 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:51.689 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:51.689 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:51.689 12:01:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:51.689 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:37:51.948 12:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:52.206 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:52.206 "name": "BaseBdev3", 00:37:52.206 "aliases": [ 00:37:52.206 "77422d09-e7d3-4b55-9f25-90887a0bba11" 00:37:52.206 ], 00:37:52.206 "product_name": "Malloc disk", 00:37:52.206 "block_size": 512, 00:37:52.206 "num_blocks": 65536, 00:37:52.206 "uuid": "77422d09-e7d3-4b55-9f25-90887a0bba11", 00:37:52.206 "assigned_rate_limits": { 00:37:52.206 "rw_ios_per_sec": 0, 00:37:52.206 "rw_mbytes_per_sec": 0, 00:37:52.206 "r_mbytes_per_sec": 0, 00:37:52.206 "w_mbytes_per_sec": 0 00:37:52.206 }, 00:37:52.206 "claimed": true, 00:37:52.206 "claim_type": "exclusive_write", 00:37:52.206 "zoned": false, 00:37:52.206 "supported_io_types": { 00:37:52.206 "read": true, 00:37:52.206 "write": true, 00:37:52.206 "unmap": true, 00:37:52.206 "write_zeroes": true, 00:37:52.206 "flush": true, 00:37:52.206 "reset": true, 00:37:52.206 "compare": false, 00:37:52.206 "compare_and_write": false, 00:37:52.206 "abort": true, 00:37:52.206 "nvme_admin": false, 00:37:52.206 "nvme_io": false 00:37:52.206 }, 00:37:52.206 "memory_domains": [ 00:37:52.206 { 00:37:52.206 "dma_device_id": "system", 00:37:52.206 "dma_device_type": 1 00:37:52.206 }, 00:37:52.206 { 00:37:52.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:52.206 "dma_device_type": 2 00:37:52.206 } 00:37:52.206 ], 00:37:52.206 "driver_specific": {} 00:37:52.206 }' 00:37:52.206 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:52.206 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:52.465 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:52.465 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:52.465 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:52.465 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:52.465 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:52.465 
12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:52.465 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:52.465 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:52.723 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:52.723 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:52.723 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:52.723 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:37:52.723 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:52.982 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:52.982 "name": "BaseBdev4", 00:37:52.982 "aliases": [ 00:37:52.982 "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6" 00:37:52.982 ], 00:37:52.982 "product_name": "Malloc disk", 00:37:52.982 "block_size": 512, 00:37:52.982 "num_blocks": 65536, 00:37:52.982 "uuid": "2b89fabc-d434-4a9f-8a51-e7d8fb9db8e6", 00:37:52.982 "assigned_rate_limits": { 00:37:52.982 "rw_ios_per_sec": 0, 00:37:52.982 "rw_mbytes_per_sec": 0, 00:37:52.982 "r_mbytes_per_sec": 0, 00:37:52.982 "w_mbytes_per_sec": 0 00:37:52.982 }, 00:37:52.982 "claimed": true, 00:37:52.982 "claim_type": "exclusive_write", 00:37:52.982 "zoned": false, 00:37:52.982 "supported_io_types": { 00:37:52.982 "read": true, 00:37:52.982 "write": true, 00:37:52.982 "unmap": true, 00:37:52.982 "write_zeroes": true, 00:37:52.982 "flush": true, 00:37:52.982 "reset": true, 00:37:52.982 "compare": false, 00:37:52.982 "compare_and_write": false, 00:37:52.982 "abort": true, 00:37:52.982 "nvme_admin": false, 00:37:52.982 "nvme_io": false 00:37:52.982 }, 00:37:52.982 "memory_domains": [ 00:37:52.982 { 00:37:52.982 "dma_device_id": "system", 00:37:52.982 "dma_device_type": 1 00:37:52.982 }, 00:37:52.982 { 00:37:52.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:52.982 "dma_device_type": 2 00:37:52.982 } 00:37:52.982 ], 00:37:52.982 "driver_specific": {} 00:37:52.982 }' 00:37:52.982 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:52.982 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:52.982 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:52.982 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:52.982 12:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:52.982 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:52.982 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:53.240 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:53.240 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:53.241 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:53.241 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
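The per-base-bdev property checks traced here follow one pattern: dump the raid bdev, pick out every configured base bdev, and compare a handful of fields with jq. A minimal sketch of that pattern, reusing only the RPC socket, bdev names, and jq filters visible in the trace (the real verify_raid_bdev_properties in bdev_raid.sh carries additional bookkeeping):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    # names of all configured base bdevs of the raid volume
    base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                             | select(.is_configured == true).name' <<< "$raid_bdev_info")
    for name in $base_bdev_names; do
        base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # raid5f over plain 512-byte malloc disks: no metadata, no DIF
        [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]
        [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]
    done
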
00:37:53.241 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:53.241 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:53.506 [2024-06-10 12:01:25.519396] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:53.506 [2024-06-10 12:01:25.519592] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:53.506 [2024-06-10 12:01:25.519779] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:53.506 [2024-06-10 12:01:25.520177] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:53.506 [2024-06-10 12:01:25.520287] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:37:53.506 12:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 157194 00:37:53.506 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 157194 ']' 00:37:53.506 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 157194 00:37:53.506 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:37:53.506 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:53.506 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 157194 00:37:53.764 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:53.764 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:53.764 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 157194' 00:37:53.764 killing process with pid 157194 00:37:53.764 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 157194 00:37:53.764 [2024-06-10 12:01:25.574888] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:53.764 12:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 157194 00:37:54.021 [2024-06-10 12:01:26.001357] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:55.393 12:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:37:55.393 00:37:55.393 real 0m36.343s 00:37:55.393 user 1m5.673s 00:37:55.393 sys 0m5.503s 00:37:55.393 12:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:55.393 12:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:55.393 ************************************ 00:37:55.393 END TEST raid5f_state_function_test_sb 00:37:55.393 ************************************ 00:37:55.651 12:01:27 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:37:55.651 12:01:27 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:37:55.651 12:01:27 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:55.651 12:01:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:55.651 ************************************ 
00:37:55.651 START TEST raid5f_superblock_test 00:37:55.651 ************************************ 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid5f 4 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:37:55.651 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=158299 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 158299 /var/tmp/spdk-raid.sock 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 158299 ']' 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:55.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:55.652 12:01:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:55.652 [2024-06-10 12:01:27.578100] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
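As the startup messages around this point show, raid_superblock_test drives a standalone bdev_svc application over a dedicated RPC socket rather than a full SPDK target. A minimal sketch of that setup, using the binary path and socket shown in the trace (PID handling and extra options in the real script differ):

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
    raid_pid=$!
    # waitforlisten is provided by the common test helpers sourced by the suite
    waitforlisten "$raid_pid" "$sock"
    # all further configuration goes through rpc.py against this socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all
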
00:37:55.652 [2024-06-10 12:01:27.578549] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158299 ] 00:37:55.911 [2024-06-10 12:01:27.766308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.169 [2024-06-10 12:01:28.040918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.427 [2024-06-10 12:01:28.257133] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:56.427 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:37:56.684 malloc1 00:37:56.684 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:57.019 [2024-06-10 12:01:28.895973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:57.019 [2024-06-10 12:01:28.896304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:57.019 [2024-06-10 12:01:28.896405] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:37:57.019 [2024-06-10 12:01:28.896665] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:57.019 [2024-06-10 12:01:28.899277] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:57.019 [2024-06-10 12:01:28.899435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:57.019 pt1 00:37:57.019 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:57.019 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:57.019 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:37:57.019 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:37:57.019 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:57.019 12:01:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:57.020 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:57.020 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:57.020 12:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:37:57.278 malloc2 00:37:57.278 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:57.537 [2024-06-10 12:01:29.434767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:57.537 [2024-06-10 12:01:29.435035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:57.537 [2024-06-10 12:01:29.435126] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:57.537 [2024-06-10 12:01:29.435289] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:57.537 [2024-06-10 12:01:29.437729] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:57.537 [2024-06-10 12:01:29.437894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:57.537 pt2 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:57.537 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:37:57.795 malloc3 00:37:57.795 12:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:58.055 [2024-06-10 12:01:30.042821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:58.055 [2024-06-10 12:01:30.043125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:58.055 [2024-06-10 12:01:30.043255] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:58.055 [2024-06-10 12:01:30.043356] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:58.055 [2024-06-10 12:01:30.046026] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:58.055 [2024-06-10 12:01:30.046210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:37:58.055 pt3 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:58.055 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:37:58.313 malloc4 00:37:58.313 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:58.571 [2024-06-10 12:01:30.478267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:58.571 [2024-06-10 12:01:30.478590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:58.571 [2024-06-10 12:01:30.478701] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:58.571 [2024-06-10 12:01:30.478891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:58.572 [2024-06-10 12:01:30.481420] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:58.572 [2024-06-10 12:01:30.481600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:58.572 pt4 00:37:58.572 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:58.572 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:58.572 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:37:58.831 [2024-06-10 12:01:30.682414] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:58.831 [2024-06-10 12:01:30.684649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:58.831 [2024-06-10 12:01:30.684836] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:58.831 [2024-06-10 12:01:30.684922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:58.831 [2024-06-10 12:01:30.685223] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:37:58.831 [2024-06-10 12:01:30.685338] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:58.831 [2024-06-10 12:01:30.685545] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:58.831 [2024-06-10 12:01:30.693203] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 
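Each base device in this test is a malloc disk wrapped in a passthru bdev with a fixed UUID, and the raid5f volume is then created on top with the superblock option. A condensed sketch of the sequence traced above, using the same names, sizes, and flags that appear in the log (the script itself builds these via its base_bdevs_* arrays):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        # UUIDs match the fixed 00000000-...-00000000000<i> values shown in the trace
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"
    done
    # -z 64: strip size in KiB; -s: on-disk superblock (the created raid reports "superblock": true)
    $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
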
00:37:58.831 [2024-06-10 12:01:30.693346] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:37:58.831 [2024-06-10 12:01:30.693678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.831 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:59.090 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:59.090 "name": "raid_bdev1", 00:37:59.090 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:37:59.090 "strip_size_kb": 64, 00:37:59.090 "state": "online", 00:37:59.090 "raid_level": "raid5f", 00:37:59.090 "superblock": true, 00:37:59.090 "num_base_bdevs": 4, 00:37:59.090 "num_base_bdevs_discovered": 4, 00:37:59.090 "num_base_bdevs_operational": 4, 00:37:59.090 "base_bdevs_list": [ 00:37:59.090 { 00:37:59.090 "name": "pt1", 00:37:59.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:59.090 "is_configured": true, 00:37:59.090 "data_offset": 2048, 00:37:59.090 "data_size": 63488 00:37:59.090 }, 00:37:59.090 { 00:37:59.090 "name": "pt2", 00:37:59.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:59.090 "is_configured": true, 00:37:59.090 "data_offset": 2048, 00:37:59.090 "data_size": 63488 00:37:59.090 }, 00:37:59.090 { 00:37:59.090 "name": "pt3", 00:37:59.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:59.090 "is_configured": true, 00:37:59.090 "data_offset": 2048, 00:37:59.090 "data_size": 63488 00:37:59.090 }, 00:37:59.090 { 00:37:59.090 "name": "pt4", 00:37:59.090 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:59.090 "is_configured": true, 00:37:59.090 "data_offset": 2048, 00:37:59.090 "data_size": 63488 00:37:59.090 } 00:37:59.090 ] 00:37:59.090 }' 00:37:59.090 12:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:59.090 12:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:59.658 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:37:59.658 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:37:59.658 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:59.658 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:59.658 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:59.658 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:37:59.658 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:59.658 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:59.915 [2024-06-10 12:01:31.855537] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:59.915 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:59.916 "name": "raid_bdev1", 00:37:59.916 "aliases": [ 00:37:59.916 "86043249-91f6-4416-b95d-a146f201b016" 00:37:59.916 ], 00:37:59.916 "product_name": "Raid Volume", 00:37:59.916 "block_size": 512, 00:37:59.916 "num_blocks": 190464, 00:37:59.916 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:37:59.916 "assigned_rate_limits": { 00:37:59.916 "rw_ios_per_sec": 0, 00:37:59.916 "rw_mbytes_per_sec": 0, 00:37:59.916 "r_mbytes_per_sec": 0, 00:37:59.916 "w_mbytes_per_sec": 0 00:37:59.916 }, 00:37:59.916 "claimed": false, 00:37:59.916 "zoned": false, 00:37:59.916 "supported_io_types": { 00:37:59.916 "read": true, 00:37:59.916 "write": true, 00:37:59.916 "unmap": false, 00:37:59.916 "write_zeroes": true, 00:37:59.916 "flush": false, 00:37:59.916 "reset": true, 00:37:59.916 "compare": false, 00:37:59.916 "compare_and_write": false, 00:37:59.916 "abort": false, 00:37:59.916 "nvme_admin": false, 00:37:59.916 "nvme_io": false 00:37:59.916 }, 00:37:59.916 "driver_specific": { 00:37:59.916 "raid": { 00:37:59.916 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:37:59.916 "strip_size_kb": 64, 00:37:59.916 "state": "online", 00:37:59.916 "raid_level": "raid5f", 00:37:59.916 "superblock": true, 00:37:59.916 "num_base_bdevs": 4, 00:37:59.916 "num_base_bdevs_discovered": 4, 00:37:59.916 "num_base_bdevs_operational": 4, 00:37:59.916 "base_bdevs_list": [ 00:37:59.916 { 00:37:59.916 "name": "pt1", 00:37:59.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:59.916 "is_configured": true, 00:37:59.916 "data_offset": 2048, 00:37:59.916 "data_size": 63488 00:37:59.916 }, 00:37:59.916 { 00:37:59.916 "name": "pt2", 00:37:59.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:59.916 "is_configured": true, 00:37:59.916 "data_offset": 2048, 00:37:59.916 "data_size": 63488 00:37:59.916 }, 00:37:59.916 { 00:37:59.916 "name": "pt3", 00:37:59.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:59.916 "is_configured": true, 00:37:59.916 "data_offset": 2048, 00:37:59.916 "data_size": 63488 00:37:59.916 }, 00:37:59.916 { 00:37:59.916 "name": "pt4", 00:37:59.916 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:59.916 "is_configured": true, 00:37:59.916 "data_offset": 2048, 00:37:59.916 "data_size": 63488 00:37:59.916 } 00:37:59.916 ] 00:37:59.916 } 00:37:59.916 } 00:37:59.916 }' 00:37:59.916 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:59.916 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:59.916 pt2 
00:37:59.916 pt3 00:37:59.916 pt4' 00:37:59.916 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:59.916 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:59.916 12:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:38:00.178 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:00.178 "name": "pt1", 00:38:00.178 "aliases": [ 00:38:00.178 "00000000-0000-0000-0000-000000000001" 00:38:00.178 ], 00:38:00.178 "product_name": "passthru", 00:38:00.178 "block_size": 512, 00:38:00.178 "num_blocks": 65536, 00:38:00.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:00.178 "assigned_rate_limits": { 00:38:00.178 "rw_ios_per_sec": 0, 00:38:00.178 "rw_mbytes_per_sec": 0, 00:38:00.178 "r_mbytes_per_sec": 0, 00:38:00.178 "w_mbytes_per_sec": 0 00:38:00.178 }, 00:38:00.178 "claimed": true, 00:38:00.178 "claim_type": "exclusive_write", 00:38:00.178 "zoned": false, 00:38:00.178 "supported_io_types": { 00:38:00.178 "read": true, 00:38:00.178 "write": true, 00:38:00.178 "unmap": true, 00:38:00.178 "write_zeroes": true, 00:38:00.178 "flush": true, 00:38:00.178 "reset": true, 00:38:00.178 "compare": false, 00:38:00.178 "compare_and_write": false, 00:38:00.178 "abort": true, 00:38:00.178 "nvme_admin": false, 00:38:00.178 "nvme_io": false 00:38:00.178 }, 00:38:00.178 "memory_domains": [ 00:38:00.178 { 00:38:00.178 "dma_device_id": "system", 00:38:00.178 "dma_device_type": 1 00:38:00.178 }, 00:38:00.178 { 00:38:00.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:00.178 "dma_device_type": 2 00:38:00.178 } 00:38:00.178 ], 00:38:00.178 "driver_specific": { 00:38:00.178 "passthru": { 00:38:00.178 "name": "pt1", 00:38:00.178 "base_bdev_name": "malloc1" 00:38:00.178 } 00:38:00.178 } 00:38:00.178 }' 00:38:00.178 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:00.178 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:00.446 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:00.704 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:00.704 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:00.704 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:38:00.704 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
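The sequence traced above (malloc bdevs wrapped in passthru bdevs, a raid5f volume with an on-disk superblock built over them, and the resulting state read back) can be reproduced against a running SPDK target with the same RPCs the harness uses; a minimal sketch, assuming the socket path from this log and an illustrative $rpc shorthand:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      # 32 MiB malloc bdev with 512-byte blocks, fronted by a passthru bdev
      $rpc bdev_malloc_create 32 512 -b "malloc$i"
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done
  # raid5f over the four passthru bdevs, 64 KiB strip size; -s writes a superblock
  $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # same query/filter the test uses to confirm the volume came up online
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # online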
00:38:00.963 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:00.963 "name": "pt2", 00:38:00.963 "aliases": [ 00:38:00.963 "00000000-0000-0000-0000-000000000002" 00:38:00.963 ], 00:38:00.963 "product_name": "passthru", 00:38:00.963 "block_size": 512, 00:38:00.963 "num_blocks": 65536, 00:38:00.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:00.963 "assigned_rate_limits": { 00:38:00.963 "rw_ios_per_sec": 0, 00:38:00.963 "rw_mbytes_per_sec": 0, 00:38:00.963 "r_mbytes_per_sec": 0, 00:38:00.963 "w_mbytes_per_sec": 0 00:38:00.963 }, 00:38:00.963 "claimed": true, 00:38:00.963 "claim_type": "exclusive_write", 00:38:00.963 "zoned": false, 00:38:00.963 "supported_io_types": { 00:38:00.963 "read": true, 00:38:00.963 "write": true, 00:38:00.963 "unmap": true, 00:38:00.963 "write_zeroes": true, 00:38:00.963 "flush": true, 00:38:00.963 "reset": true, 00:38:00.963 "compare": false, 00:38:00.963 "compare_and_write": false, 00:38:00.963 "abort": true, 00:38:00.963 "nvme_admin": false, 00:38:00.963 "nvme_io": false 00:38:00.963 }, 00:38:00.963 "memory_domains": [ 00:38:00.963 { 00:38:00.963 "dma_device_id": "system", 00:38:00.963 "dma_device_type": 1 00:38:00.963 }, 00:38:00.963 { 00:38:00.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:00.963 "dma_device_type": 2 00:38:00.963 } 00:38:00.963 ], 00:38:00.963 "driver_specific": { 00:38:00.963 "passthru": { 00:38:00.963 "name": "pt2", 00:38:00.963 "base_bdev_name": "malloc2" 00:38:00.963 } 00:38:00.963 } 00:38:00.963 }' 00:38:00.963 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:00.963 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:00.963 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:38:00.963 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:00.963 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:00.963 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:00.963 12:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:00.963 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:01.221 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:01.221 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:01.221 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:01.221 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:01.221 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:01.221 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:38:01.221 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:01.480 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:01.480 "name": "pt3", 00:38:01.480 "aliases": [ 00:38:01.480 "00000000-0000-0000-0000-000000000003" 00:38:01.480 ], 00:38:01.480 "product_name": "passthru", 00:38:01.480 "block_size": 512, 00:38:01.480 "num_blocks": 65536, 00:38:01.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:01.481 
"assigned_rate_limits": { 00:38:01.481 "rw_ios_per_sec": 0, 00:38:01.481 "rw_mbytes_per_sec": 0, 00:38:01.481 "r_mbytes_per_sec": 0, 00:38:01.481 "w_mbytes_per_sec": 0 00:38:01.481 }, 00:38:01.481 "claimed": true, 00:38:01.481 "claim_type": "exclusive_write", 00:38:01.481 "zoned": false, 00:38:01.481 "supported_io_types": { 00:38:01.481 "read": true, 00:38:01.481 "write": true, 00:38:01.481 "unmap": true, 00:38:01.481 "write_zeroes": true, 00:38:01.481 "flush": true, 00:38:01.481 "reset": true, 00:38:01.481 "compare": false, 00:38:01.481 "compare_and_write": false, 00:38:01.481 "abort": true, 00:38:01.481 "nvme_admin": false, 00:38:01.481 "nvme_io": false 00:38:01.481 }, 00:38:01.481 "memory_domains": [ 00:38:01.481 { 00:38:01.481 "dma_device_id": "system", 00:38:01.481 "dma_device_type": 1 00:38:01.481 }, 00:38:01.481 { 00:38:01.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:01.481 "dma_device_type": 2 00:38:01.481 } 00:38:01.481 ], 00:38:01.481 "driver_specific": { 00:38:01.481 "passthru": { 00:38:01.481 "name": "pt3", 00:38:01.481 "base_bdev_name": "malloc3" 00:38:01.481 } 00:38:01.481 } 00:38:01.481 }' 00:38:01.481 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:01.481 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:01.481 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:38:01.481 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:01.481 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:38:01.775 12:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:02.034 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:02.034 "name": "pt4", 00:38:02.034 "aliases": [ 00:38:02.034 "00000000-0000-0000-0000-000000000004" 00:38:02.034 ], 00:38:02.034 "product_name": "passthru", 00:38:02.034 "block_size": 512, 00:38:02.034 "num_blocks": 65536, 00:38:02.034 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:02.034 "assigned_rate_limits": { 00:38:02.034 "rw_ios_per_sec": 0, 00:38:02.034 "rw_mbytes_per_sec": 0, 00:38:02.034 "r_mbytes_per_sec": 0, 00:38:02.034 "w_mbytes_per_sec": 0 00:38:02.034 }, 00:38:02.034 "claimed": true, 00:38:02.034 "claim_type": "exclusive_write", 00:38:02.034 "zoned": false, 00:38:02.034 "supported_io_types": { 00:38:02.034 "read": true, 00:38:02.034 "write": true, 00:38:02.034 "unmap": true, 00:38:02.034 
"write_zeroes": true, 00:38:02.034 "flush": true, 00:38:02.034 "reset": true, 00:38:02.034 "compare": false, 00:38:02.034 "compare_and_write": false, 00:38:02.034 "abort": true, 00:38:02.034 "nvme_admin": false, 00:38:02.034 "nvme_io": false 00:38:02.034 }, 00:38:02.034 "memory_domains": [ 00:38:02.034 { 00:38:02.034 "dma_device_id": "system", 00:38:02.034 "dma_device_type": 1 00:38:02.034 }, 00:38:02.034 { 00:38:02.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:02.034 "dma_device_type": 2 00:38:02.034 } 00:38:02.034 ], 00:38:02.034 "driver_specific": { 00:38:02.034 "passthru": { 00:38:02.034 "name": "pt4", 00:38:02.034 "base_bdev_name": "malloc4" 00:38:02.034 } 00:38:02.034 } 00:38:02.034 }' 00:38:02.034 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:02.292 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:02.551 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:02.551 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:02.551 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:38:02.551 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:02.551 [2024-06-10 12:01:34.600133] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:02.808 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=86043249-91f6-4416-b95d-a146f201b016 00:38:02.809 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 86043249-91f6-4416-b95d-a146f201b016 ']' 00:38:02.809 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:02.809 [2024-06-10 12:01:34.860100] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:02.809 [2024-06-10 12:01:34.860352] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:02.809 [2024-06-10 12:01:34.860592] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:02.809 [2024-06-10 12:01:34.860771] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:02.809 [2024-06-10 12:01:34.860861] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:38:03.067 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:03.067 12:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:38:03.067 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:38:03.067 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:38:03.067 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:03.067 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:38:03.326 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:03.326 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:03.584 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:03.584 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:38:03.842 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:03.842 12:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:38:04.100 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:38:04.100 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:04.358 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:38:04.358 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:04.358 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
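The teardown recorded above and the negative re-create it leads into can be driven the same way; a sketch, assuming the malloc bdevs still carry the superblock written for raid_bdev1, which is why the re-create is expected to fail with the -17 "File exists" JSON-RPC error shown just below:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_delete raid_bdev1
  for i in 1 2 3 4; do $rpc bdev_passthru_delete "pt$i"; done
  # no passthru bdevs should remain at this point
  $rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'   # false
  # building the raid directly on the malloc bdevs must fail: their superblocks
  # still describe raid_bdev1, so bdev_raid_create returns -17 "File exists"
  $rpc bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 \
      && echo "unexpected success" || echo "failed as expected"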
00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:04.359 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:04.628 [2024-06-10 12:01:36.639177] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:04.628 [2024-06-10 12:01:36.641543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:04.628 [2024-06-10 12:01:36.641767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:38:04.628 [2024-06-10 12:01:36.641839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:38:04.629 [2024-06-10 12:01:36.641978] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:38:04.629 [2024-06-10 12:01:36.642104] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:38:04.629 [2024-06-10 12:01:36.642229] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:38:04.629 [2024-06-10 12:01:36.642355] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:38:04.629 [2024-06-10 12:01:36.642496] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:04.629 [2024-06-10 12:01:36.642536] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:38:04.629 request: 00:38:04.629 { 00:38:04.629 "name": "raid_bdev1", 00:38:04.629 "raid_level": "raid5f", 00:38:04.629 "base_bdevs": [ 00:38:04.629 "malloc1", 00:38:04.629 "malloc2", 00:38:04.629 "malloc3", 00:38:04.629 "malloc4" 00:38:04.629 ], 00:38:04.629 "strip_size_kb": 64, 00:38:04.629 "superblock": false, 00:38:04.629 "method": "bdev_raid_create", 00:38:04.629 "req_id": 1 00:38:04.629 } 00:38:04.629 Got JSON-RPC error response 00:38:04.629 response: 00:38:04.629 { 00:38:04.629 "code": -17, 00:38:04.629 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:04.629 } 00:38:04.629 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:38:04.629 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:04.629 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:04.629 12:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:04.629 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:04.629 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:38:04.893 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:38:04.893 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:38:04.893 12:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:38:05.459 [2024-06-10 12:01:37.215333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:05.459 [2024-06-10 12:01:37.215667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:05.459 [2024-06-10 12:01:37.215739] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:05.459 [2024-06-10 12:01:37.215866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:05.459 [2024-06-10 12:01:37.218313] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:05.459 [2024-06-10 12:01:37.218508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:05.459 [2024-06-10 12:01:37.218750] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:05.460 [2024-06-10 12:01:37.218902] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:05.460 pt1 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:05.460 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:05.718 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:05.718 "name": "raid_bdev1", 00:38:05.718 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:05.718 "strip_size_kb": 64, 00:38:05.718 "state": "configuring", 00:38:05.718 "raid_level": "raid5f", 00:38:05.718 "superblock": true, 00:38:05.718 "num_base_bdevs": 4, 00:38:05.718 "num_base_bdevs_discovered": 1, 00:38:05.718 "num_base_bdevs_operational": 4, 00:38:05.718 "base_bdevs_list": [ 00:38:05.718 { 00:38:05.718 "name": "pt1", 00:38:05.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:05.718 "is_configured": true, 00:38:05.718 "data_offset": 2048, 00:38:05.718 "data_size": 63488 00:38:05.718 }, 00:38:05.718 { 00:38:05.718 "name": null, 00:38:05.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:05.718 "is_configured": false, 00:38:05.718 "data_offset": 2048, 00:38:05.718 "data_size": 63488 00:38:05.718 }, 00:38:05.718 { 00:38:05.718 "name": null, 00:38:05.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:05.718 "is_configured": false, 
00:38:05.718 "data_offset": 2048, 00:38:05.718 "data_size": 63488 00:38:05.718 }, 00:38:05.718 { 00:38:05.718 "name": null, 00:38:05.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:05.718 "is_configured": false, 00:38:05.718 "data_offset": 2048, 00:38:05.718 "data_size": 63488 00:38:05.718 } 00:38:05.718 ] 00:38:05.718 }' 00:38:05.718 12:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:05.718 12:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:06.284 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:38:06.284 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:06.542 [2024-06-10 12:01:38.351615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:06.542 [2024-06-10 12:01:38.351962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:06.542 [2024-06-10 12:01:38.352048] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:38:06.542 [2024-06-10 12:01:38.352225] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:06.542 [2024-06-10 12:01:38.352774] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:06.542 [2024-06-10 12:01:38.352922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:06.542 [2024-06-10 12:01:38.353181] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:06.542 [2024-06-10 12:01:38.353312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:06.542 pt2 00:38:06.542 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:06.800 [2024-06-10 12:01:38.627717] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:06.800 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:38:06.801 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:06.801 "name": "raid_bdev1", 00:38:06.801 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:06.801 "strip_size_kb": 64, 00:38:06.801 "state": "configuring", 00:38:06.801 "raid_level": "raid5f", 00:38:06.801 "superblock": true, 00:38:06.801 "num_base_bdevs": 4, 00:38:06.801 "num_base_bdevs_discovered": 1, 00:38:06.801 "num_base_bdevs_operational": 4, 00:38:06.801 "base_bdevs_list": [ 00:38:06.801 { 00:38:06.801 "name": "pt1", 00:38:06.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:06.801 "is_configured": true, 00:38:06.801 "data_offset": 2048, 00:38:06.801 "data_size": 63488 00:38:06.801 }, 00:38:06.801 { 00:38:06.801 "name": null, 00:38:06.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:06.801 "is_configured": false, 00:38:06.801 "data_offset": 2048, 00:38:06.801 "data_size": 63488 00:38:06.801 }, 00:38:06.801 { 00:38:06.801 "name": null, 00:38:06.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:06.801 "is_configured": false, 00:38:06.801 "data_offset": 2048, 00:38:06.801 "data_size": 63488 00:38:06.801 }, 00:38:06.801 { 00:38:06.801 "name": null, 00:38:06.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:06.801 "is_configured": false, 00:38:06.801 "data_offset": 2048, 00:38:06.801 "data_size": 63488 00:38:06.801 } 00:38:06.801 ] 00:38:06.801 }' 00:38:06.801 12:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:06.801 12:01:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.737 12:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:38:07.737 12:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:07.737 12:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:07.737 [2024-06-10 12:01:39.683932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:07.737 [2024-06-10 12:01:39.684254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:07.737 [2024-06-10 12:01:39.684329] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:38:07.737 [2024-06-10 12:01:39.684446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:07.737 [2024-06-10 12:01:39.684936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:07.737 [2024-06-10 12:01:39.685087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:07.737 [2024-06-10 12:01:39.685307] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:07.737 [2024-06-10 12:01:39.685436] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:07.737 pt2 00:38:07.737 12:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:38:07.737 12:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:07.737 12:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:38:07.994 [2024-06-10 
12:01:40.008089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:38:07.994 [2024-06-10 12:01:40.008402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:07.994 [2024-06-10 12:01:40.008536] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:38:07.994 [2024-06-10 12:01:40.008671] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:07.994 [2024-06-10 12:01:40.009208] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:07.994 [2024-06-10 12:01:40.009374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:38:07.994 [2024-06-10 12:01:40.009598] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:38:07.994 [2024-06-10 12:01:40.009742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:07.994 pt3 00:38:07.994 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:38:07.995 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:07.995 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:38:08.253 [2024-06-10 12:01:40.288084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:08.253 [2024-06-10 12:01:40.288386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.253 [2024-06-10 12:01:40.288473] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:38:08.253 [2024-06-10 12:01:40.288597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.253 [2024-06-10 12:01:40.289189] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.253 [2024-06-10 12:01:40.289368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:08.253 [2024-06-10 12:01:40.289610] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:38:08.253 [2024-06-10 12:01:40.289728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:08.253 [2024-06-10 12:01:40.289978] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:38:08.253 [2024-06-10 12:01:40.290074] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:38:08.253 [2024-06-10 12:01:40.290208] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:38:08.253 [2024-06-10 12:01:40.297784] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:38:08.253 [2024-06-10 12:01:40.297950] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:38:08.254 [2024-06-10 12:01:40.298259] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:08.254 pt4 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:08.513 12:01:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:08.513 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:08.772 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:08.772 "name": "raid_bdev1", 00:38:08.772 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:08.772 "strip_size_kb": 64, 00:38:08.772 "state": "online", 00:38:08.772 "raid_level": "raid5f", 00:38:08.772 "superblock": true, 00:38:08.772 "num_base_bdevs": 4, 00:38:08.772 "num_base_bdevs_discovered": 4, 00:38:08.772 "num_base_bdevs_operational": 4, 00:38:08.772 "base_bdevs_list": [ 00:38:08.772 { 00:38:08.772 "name": "pt1", 00:38:08.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:08.772 "is_configured": true, 00:38:08.772 "data_offset": 2048, 00:38:08.772 "data_size": 63488 00:38:08.772 }, 00:38:08.772 { 00:38:08.772 "name": "pt2", 00:38:08.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:08.772 "is_configured": true, 00:38:08.772 "data_offset": 2048, 00:38:08.772 "data_size": 63488 00:38:08.772 }, 00:38:08.772 { 00:38:08.772 "name": "pt3", 00:38:08.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:08.772 "is_configured": true, 00:38:08.772 "data_offset": 2048, 00:38:08.772 "data_size": 63488 00:38:08.772 }, 00:38:08.772 { 00:38:08.772 "name": "pt4", 00:38:08.772 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:08.772 "is_configured": true, 00:38:08.772 "data_offset": 2048, 00:38:08.772 "data_size": 63488 00:38:08.772 } 00:38:08.772 ] 00:38:08.772 }' 00:38:08.772 12:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:08.772 12:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:09.338 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:38:09.338 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:38:09.338 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:09.338 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:09.338 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:09.338 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:38:09.338 
12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:09.338 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:09.595 [2024-06-10 12:01:41.460494] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:09.595 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:09.595 "name": "raid_bdev1", 00:38:09.595 "aliases": [ 00:38:09.595 "86043249-91f6-4416-b95d-a146f201b016" 00:38:09.595 ], 00:38:09.595 "product_name": "Raid Volume", 00:38:09.595 "block_size": 512, 00:38:09.596 "num_blocks": 190464, 00:38:09.596 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:09.596 "assigned_rate_limits": { 00:38:09.596 "rw_ios_per_sec": 0, 00:38:09.596 "rw_mbytes_per_sec": 0, 00:38:09.596 "r_mbytes_per_sec": 0, 00:38:09.596 "w_mbytes_per_sec": 0 00:38:09.596 }, 00:38:09.596 "claimed": false, 00:38:09.596 "zoned": false, 00:38:09.596 "supported_io_types": { 00:38:09.596 "read": true, 00:38:09.596 "write": true, 00:38:09.596 "unmap": false, 00:38:09.596 "write_zeroes": true, 00:38:09.596 "flush": false, 00:38:09.596 "reset": true, 00:38:09.596 "compare": false, 00:38:09.596 "compare_and_write": false, 00:38:09.596 "abort": false, 00:38:09.596 "nvme_admin": false, 00:38:09.596 "nvme_io": false 00:38:09.596 }, 00:38:09.596 "driver_specific": { 00:38:09.596 "raid": { 00:38:09.596 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:09.596 "strip_size_kb": 64, 00:38:09.596 "state": "online", 00:38:09.596 "raid_level": "raid5f", 00:38:09.596 "superblock": true, 00:38:09.596 "num_base_bdevs": 4, 00:38:09.596 "num_base_bdevs_discovered": 4, 00:38:09.596 "num_base_bdevs_operational": 4, 00:38:09.596 "base_bdevs_list": [ 00:38:09.596 { 00:38:09.596 "name": "pt1", 00:38:09.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:09.596 "is_configured": true, 00:38:09.596 "data_offset": 2048, 00:38:09.596 "data_size": 63488 00:38:09.596 }, 00:38:09.596 { 00:38:09.596 "name": "pt2", 00:38:09.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:09.596 "is_configured": true, 00:38:09.596 "data_offset": 2048, 00:38:09.596 "data_size": 63488 00:38:09.596 }, 00:38:09.596 { 00:38:09.596 "name": "pt3", 00:38:09.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:09.596 "is_configured": true, 00:38:09.596 "data_offset": 2048, 00:38:09.596 "data_size": 63488 00:38:09.596 }, 00:38:09.596 { 00:38:09.596 "name": "pt4", 00:38:09.596 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:09.596 "is_configured": true, 00:38:09.596 "data_offset": 2048, 00:38:09.596 "data_size": 63488 00:38:09.596 } 00:38:09.596 ] 00:38:09.596 } 00:38:09.596 } 00:38:09.596 }' 00:38:09.596 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:09.596 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:38:09.596 pt2 00:38:09.596 pt3 00:38:09.596 pt4' 00:38:09.596 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:09.596 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:38:09.596 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:09.854 12:01:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:09.854 "name": "pt1", 00:38:09.854 "aliases": [ 00:38:09.854 "00000000-0000-0000-0000-000000000001" 00:38:09.854 ], 00:38:09.854 "product_name": "passthru", 00:38:09.854 "block_size": 512, 00:38:09.854 "num_blocks": 65536, 00:38:09.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:09.854 "assigned_rate_limits": { 00:38:09.854 "rw_ios_per_sec": 0, 00:38:09.854 "rw_mbytes_per_sec": 0, 00:38:09.854 "r_mbytes_per_sec": 0, 00:38:09.854 "w_mbytes_per_sec": 0 00:38:09.854 }, 00:38:09.854 "claimed": true, 00:38:09.854 "claim_type": "exclusive_write", 00:38:09.854 "zoned": false, 00:38:09.854 "supported_io_types": { 00:38:09.854 "read": true, 00:38:09.854 "write": true, 00:38:09.854 "unmap": true, 00:38:09.854 "write_zeroes": true, 00:38:09.854 "flush": true, 00:38:09.854 "reset": true, 00:38:09.854 "compare": false, 00:38:09.854 "compare_and_write": false, 00:38:09.854 "abort": true, 00:38:09.854 "nvme_admin": false, 00:38:09.854 "nvme_io": false 00:38:09.854 }, 00:38:09.854 "memory_domains": [ 00:38:09.854 { 00:38:09.854 "dma_device_id": "system", 00:38:09.854 "dma_device_type": 1 00:38:09.854 }, 00:38:09.854 { 00:38:09.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.855 "dma_device_type": 2 00:38:09.855 } 00:38:09.855 ], 00:38:09.855 "driver_specific": { 00:38:09.855 "passthru": { 00:38:09.855 "name": "pt1", 00:38:09.855 "base_bdev_name": "malloc1" 00:38:09.855 } 00:38:09.855 } 00:38:09.855 }' 00:38:09.855 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:09.855 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:09.855 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:38:09.855 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:10.113 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:10.113 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:10.113 12:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:10.113 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:10.113 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:10.113 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:10.113 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:10.113 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:10.113 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:10.113 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:38:10.113 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:10.404 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:10.404 "name": "pt2", 00:38:10.404 "aliases": [ 00:38:10.404 "00000000-0000-0000-0000-000000000002" 00:38:10.404 ], 00:38:10.404 "product_name": "passthru", 00:38:10.404 "block_size": 512, 00:38:10.404 "num_blocks": 65536, 00:38:10.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:10.404 "assigned_rate_limits": { 00:38:10.404 
"rw_ios_per_sec": 0, 00:38:10.404 "rw_mbytes_per_sec": 0, 00:38:10.404 "r_mbytes_per_sec": 0, 00:38:10.404 "w_mbytes_per_sec": 0 00:38:10.404 }, 00:38:10.404 "claimed": true, 00:38:10.404 "claim_type": "exclusive_write", 00:38:10.404 "zoned": false, 00:38:10.404 "supported_io_types": { 00:38:10.404 "read": true, 00:38:10.404 "write": true, 00:38:10.404 "unmap": true, 00:38:10.404 "write_zeroes": true, 00:38:10.404 "flush": true, 00:38:10.404 "reset": true, 00:38:10.404 "compare": false, 00:38:10.404 "compare_and_write": false, 00:38:10.404 "abort": true, 00:38:10.404 "nvme_admin": false, 00:38:10.404 "nvme_io": false 00:38:10.404 }, 00:38:10.404 "memory_domains": [ 00:38:10.404 { 00:38:10.404 "dma_device_id": "system", 00:38:10.404 "dma_device_type": 1 00:38:10.404 }, 00:38:10.404 { 00:38:10.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:10.404 "dma_device_type": 2 00:38:10.404 } 00:38:10.404 ], 00:38:10.404 "driver_specific": { 00:38:10.404 "passthru": { 00:38:10.404 "name": "pt2", 00:38:10.404 "base_bdev_name": "malloc2" 00:38:10.404 } 00:38:10.404 } 00:38:10.404 }' 00:38:10.404 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:10.404 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:10.404 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:38:10.404 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:10.679 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:38:10.936 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:10.936 "name": "pt3", 00:38:10.936 "aliases": [ 00:38:10.936 "00000000-0000-0000-0000-000000000003" 00:38:10.936 ], 00:38:10.936 "product_name": "passthru", 00:38:10.936 "block_size": 512, 00:38:10.936 "num_blocks": 65536, 00:38:10.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:10.936 "assigned_rate_limits": { 00:38:10.936 "rw_ios_per_sec": 0, 00:38:10.936 "rw_mbytes_per_sec": 0, 00:38:10.936 "r_mbytes_per_sec": 0, 00:38:10.936 "w_mbytes_per_sec": 0 00:38:10.936 }, 00:38:10.936 "claimed": true, 00:38:10.936 "claim_type": "exclusive_write", 00:38:10.936 "zoned": false, 00:38:10.936 "supported_io_types": { 00:38:10.936 "read": true, 00:38:10.936 "write": true, 00:38:10.936 "unmap": true, 00:38:10.936 "write_zeroes": true, 00:38:10.936 
"flush": true, 00:38:10.936 "reset": true, 00:38:10.936 "compare": false, 00:38:10.936 "compare_and_write": false, 00:38:10.936 "abort": true, 00:38:10.936 "nvme_admin": false, 00:38:10.936 "nvme_io": false 00:38:10.936 }, 00:38:10.936 "memory_domains": [ 00:38:10.936 { 00:38:10.936 "dma_device_id": "system", 00:38:10.936 "dma_device_type": 1 00:38:10.936 }, 00:38:10.936 { 00:38:10.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:10.936 "dma_device_type": 2 00:38:10.936 } 00:38:10.936 ], 00:38:10.936 "driver_specific": { 00:38:10.936 "passthru": { 00:38:10.936 "name": "pt3", 00:38:10.936 "base_bdev_name": "malloc3" 00:38:10.936 } 00:38:10.936 } 00:38:10.936 }' 00:38:10.936 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:10.937 12:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:11.192 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:38:11.193 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:11.193 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:11.193 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:11.193 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:11.193 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:11.193 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:11.193 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:11.193 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:11.449 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:11.449 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:11.449 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:38:11.449 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:11.706 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:11.706 "name": "pt4", 00:38:11.706 "aliases": [ 00:38:11.706 "00000000-0000-0000-0000-000000000004" 00:38:11.706 ], 00:38:11.706 "product_name": "passthru", 00:38:11.706 "block_size": 512, 00:38:11.706 "num_blocks": 65536, 00:38:11.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:11.706 "assigned_rate_limits": { 00:38:11.706 "rw_ios_per_sec": 0, 00:38:11.706 "rw_mbytes_per_sec": 0, 00:38:11.706 "r_mbytes_per_sec": 0, 00:38:11.706 "w_mbytes_per_sec": 0 00:38:11.706 }, 00:38:11.706 "claimed": true, 00:38:11.706 "claim_type": "exclusive_write", 00:38:11.706 "zoned": false, 00:38:11.706 "supported_io_types": { 00:38:11.706 "read": true, 00:38:11.706 "write": true, 00:38:11.706 "unmap": true, 00:38:11.706 "write_zeroes": true, 00:38:11.706 "flush": true, 00:38:11.706 "reset": true, 00:38:11.706 "compare": false, 00:38:11.706 "compare_and_write": false, 00:38:11.706 "abort": true, 00:38:11.706 "nvme_admin": false, 00:38:11.706 "nvme_io": false 00:38:11.706 }, 00:38:11.706 "memory_domains": [ 00:38:11.706 { 00:38:11.706 "dma_device_id": "system", 00:38:11.706 "dma_device_type": 1 00:38:11.706 }, 00:38:11.706 { 00:38:11.706 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:38:11.706 "dma_device_type": 2 00:38:11.706 } 00:38:11.706 ], 00:38:11.706 "driver_specific": { 00:38:11.706 "passthru": { 00:38:11.706 "name": "pt4", 00:38:11.706 "base_bdev_name": "malloc4" 00:38:11.706 } 00:38:11.706 } 00:38:11.706 }' 00:38:11.706 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:11.706 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:11.706 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:38:11.706 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:11.706 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:11.706 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:11.706 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:11.964 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:11.964 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:11.964 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:11.964 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:11.964 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:11.964 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:11.964 12:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:38:12.222 [2024-06-10 12:01:44.171501] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:12.222 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 86043249-91f6-4416-b95d-a146f201b016 '!=' 86043249-91f6-4416-b95d-a146f201b016 ']' 00:38:12.222 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:38:12.222 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:38:12.222 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:38:12.222 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:38:12.488 [2024-06-10 12:01:44.443472] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:12.488 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:12.745 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:12.745 "name": "raid_bdev1", 00:38:12.745 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:12.745 "strip_size_kb": 64, 00:38:12.745 "state": "online", 00:38:12.745 "raid_level": "raid5f", 00:38:12.745 "superblock": true, 00:38:12.745 "num_base_bdevs": 4, 00:38:12.745 "num_base_bdevs_discovered": 3, 00:38:12.745 "num_base_bdevs_operational": 3, 00:38:12.745 "base_bdevs_list": [ 00:38:12.745 { 00:38:12.745 "name": null, 00:38:12.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.745 "is_configured": false, 00:38:12.745 "data_offset": 2048, 00:38:12.745 "data_size": 63488 00:38:12.745 }, 00:38:12.745 { 00:38:12.745 "name": "pt2", 00:38:12.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:12.745 "is_configured": true, 00:38:12.745 "data_offset": 2048, 00:38:12.745 "data_size": 63488 00:38:12.745 }, 00:38:12.745 { 00:38:12.745 "name": "pt3", 00:38:12.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:12.745 "is_configured": true, 00:38:12.745 "data_offset": 2048, 00:38:12.745 "data_size": 63488 00:38:12.745 }, 00:38:12.745 { 00:38:12.745 "name": "pt4", 00:38:12.745 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:12.745 "is_configured": true, 00:38:12.745 "data_offset": 2048, 00:38:12.745 "data_size": 63488 00:38:12.745 } 00:38:12.745 ] 00:38:12.745 }' 00:38:12.745 12:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:12.745 12:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:13.311 12:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:13.877 [2024-06-10 12:01:45.639767] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:13.877 [2024-06-10 12:01:45.640012] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:13.877 [2024-06-10 12:01:45.640223] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:13.877 [2024-06-10 12:01:45.640400] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:13.877 [2024-06-10 12:01:45.640496] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:38:13.877 12:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:38:13.877 12:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:13.877 12:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:38:13.877 12:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:38:13.877 12:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 
1 )) 00:38:13.877 12:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:13.877 12:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:14.135 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:38:14.135 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:14.135 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:38:14.391 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:38:14.391 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:14.391 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:38:14.649 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:38:14.649 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:14.649 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:38:14.649 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:38:14.649 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:14.907 [2024-06-10 12:01:46.711975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:14.907 [2024-06-10 12:01:46.712277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:14.907 [2024-06-10 12:01:46.712349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:38:14.907 [2024-06-10 12:01:46.712471] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:14.907 [2024-06-10 12:01:46.715176] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:14.907 [2024-06-10 12:01:46.715355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:14.907 [2024-06-10 12:01:46.715619] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:14.907 [2024-06-10 12:01:46.715768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:14.907 pt2 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:14.907 12:01:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:14.907 "name": "raid_bdev1", 00:38:14.907 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:14.907 "strip_size_kb": 64, 00:38:14.907 "state": "configuring", 00:38:14.907 "raid_level": "raid5f", 00:38:14.907 "superblock": true, 00:38:14.907 "num_base_bdevs": 4, 00:38:14.907 "num_base_bdevs_discovered": 1, 00:38:14.907 "num_base_bdevs_operational": 3, 00:38:14.907 "base_bdevs_list": [ 00:38:14.907 { 00:38:14.907 "name": null, 00:38:14.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:14.907 "is_configured": false, 00:38:14.907 "data_offset": 2048, 00:38:14.907 "data_size": 63488 00:38:14.907 }, 00:38:14.907 { 00:38:14.907 "name": "pt2", 00:38:14.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:14.907 "is_configured": true, 00:38:14.907 "data_offset": 2048, 00:38:14.907 "data_size": 63488 00:38:14.907 }, 00:38:14.907 { 00:38:14.907 "name": null, 00:38:14.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:14.907 "is_configured": false, 00:38:14.907 "data_offset": 2048, 00:38:14.907 "data_size": 63488 00:38:14.907 }, 00:38:14.907 { 00:38:14.907 "name": null, 00:38:14.907 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:14.907 "is_configured": false, 00:38:14.907 "data_offset": 2048, 00:38:14.907 "data_size": 63488 00:38:14.907 } 00:38:14.907 ] 00:38:14.907 }' 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:14.907 12:01:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:38:15.841 [2024-06-10 12:01:47.752215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:38:15.841 [2024-06-10 12:01:47.752471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:15.841 [2024-06-10 12:01:47.752548] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:38:15.841 [2024-06-10 12:01:47.752671] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:15.841 [2024-06-10 12:01:47.753186] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:15.841 [2024-06-10 12:01:47.753338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:38:15.841 [2024-06-10 12:01:47.753555] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:38:15.841 [2024-06-10 
12:01:47.753668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:15.841 pt3 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:15.841 12:01:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.099 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:16.099 "name": "raid_bdev1", 00:38:16.099 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:16.099 "strip_size_kb": 64, 00:38:16.099 "state": "configuring", 00:38:16.099 "raid_level": "raid5f", 00:38:16.099 "superblock": true, 00:38:16.099 "num_base_bdevs": 4, 00:38:16.099 "num_base_bdevs_discovered": 2, 00:38:16.099 "num_base_bdevs_operational": 3, 00:38:16.099 "base_bdevs_list": [ 00:38:16.099 { 00:38:16.099 "name": null, 00:38:16.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:16.099 "is_configured": false, 00:38:16.099 "data_offset": 2048, 00:38:16.099 "data_size": 63488 00:38:16.099 }, 00:38:16.099 { 00:38:16.099 "name": "pt2", 00:38:16.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:16.099 "is_configured": true, 00:38:16.099 "data_offset": 2048, 00:38:16.099 "data_size": 63488 00:38:16.099 }, 00:38:16.099 { 00:38:16.099 "name": "pt3", 00:38:16.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:16.099 "is_configured": true, 00:38:16.099 "data_offset": 2048, 00:38:16.099 "data_size": 63488 00:38:16.099 }, 00:38:16.099 { 00:38:16.099 "name": null, 00:38:16.099 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:16.099 "is_configured": false, 00:38:16.099 "data_offset": 2048, 00:38:16.099 "data_size": 63488 00:38:16.099 } 00:38:16.099 ] 00:38:16.099 }' 00:38:16.099 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:16.099 12:01:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:16.665 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:38:16.665 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:38:16.665 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:38:16.665 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:38:16.923 [2024-06-10 12:01:48.908467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:16.923 [2024-06-10 12:01:48.908794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:16.923 [2024-06-10 12:01:48.908875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:38:16.923 [2024-06-10 12:01:48.908976] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:16.923 [2024-06-10 12:01:48.909535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:16.923 [2024-06-10 12:01:48.909701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:16.923 [2024-06-10 12:01:48.909932] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:38:16.923 [2024-06-10 12:01:48.910082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:16.923 [2024-06-10 12:01:48.910257] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:38:16.923 [2024-06-10 12:01:48.910373] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:38:16.923 [2024-06-10 12:01:48.910507] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:38:16.923 [2024-06-10 12:01:48.918738] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:38:16.923 [2024-06-10 12:01:48.918892] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:38:16.923 [2024-06-10 12:01:48.919326] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:16.923 pt4 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.923 12:01:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:17.181 12:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:17.181 "name": "raid_bdev1", 00:38:17.181 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 
00:38:17.181 "strip_size_kb": 64, 00:38:17.181 "state": "online", 00:38:17.181 "raid_level": "raid5f", 00:38:17.181 "superblock": true, 00:38:17.181 "num_base_bdevs": 4, 00:38:17.181 "num_base_bdevs_discovered": 3, 00:38:17.181 "num_base_bdevs_operational": 3, 00:38:17.181 "base_bdevs_list": [ 00:38:17.181 { 00:38:17.181 "name": null, 00:38:17.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:17.181 "is_configured": false, 00:38:17.181 "data_offset": 2048, 00:38:17.181 "data_size": 63488 00:38:17.181 }, 00:38:17.181 { 00:38:17.181 "name": "pt2", 00:38:17.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:17.181 "is_configured": true, 00:38:17.181 "data_offset": 2048, 00:38:17.181 "data_size": 63488 00:38:17.181 }, 00:38:17.181 { 00:38:17.181 "name": "pt3", 00:38:17.181 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:17.181 "is_configured": true, 00:38:17.181 "data_offset": 2048, 00:38:17.181 "data_size": 63488 00:38:17.181 }, 00:38:17.181 { 00:38:17.181 "name": "pt4", 00:38:17.181 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:17.181 "is_configured": true, 00:38:17.181 "data_offset": 2048, 00:38:17.181 "data_size": 63488 00:38:17.181 } 00:38:17.181 ] 00:38:17.181 }' 00:38:17.181 12:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:17.181 12:01:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:18.114 12:01:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:18.114 [2024-06-10 12:01:50.050477] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:18.114 [2024-06-10 12:01:50.050777] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:18.114 [2024-06-10 12:01:50.050996] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:18.114 [2024-06-10 12:01:50.051210] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:18.114 [2024-06-10 12:01:50.051312] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:38:18.114 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:18.114 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:38:18.372 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:38:18.372 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:38:18.372 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:38:18.372 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:38:18.372 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:38:18.630 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:18.888 [2024-06-10 12:01:50.806628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:18.888 [2024-06-10 12:01:50.806964] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:38:18.888 [2024-06-10 12:01:50.807045] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:38:18.888 [2024-06-10 12:01:50.807191] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:18.888 [2024-06-10 12:01:50.809936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:18.888 [2024-06-10 12:01:50.810129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:18.888 [2024-06-10 12:01:50.810355] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:18.888 [2024-06-10 12:01:50.810491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:18.888 [2024-06-10 12:01:50.810715] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:38:18.888 [2024-06-10 12:01:50.810826] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:18.888 [2024-06-10 12:01:50.810878] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:38:18.888 [2024-06-10 12:01:50.811025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:18.888 [2024-06-10 12:01:50.811179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:18.888 pt1 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:18.888 12:01:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:19.146 12:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:19.146 "name": "raid_bdev1", 00:38:19.146 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:19.146 "strip_size_kb": 64, 00:38:19.146 "state": "configuring", 00:38:19.146 "raid_level": "raid5f", 00:38:19.146 "superblock": true, 00:38:19.146 "num_base_bdevs": 4, 00:38:19.146 "num_base_bdevs_discovered": 2, 00:38:19.146 "num_base_bdevs_operational": 3, 00:38:19.146 "base_bdevs_list": [ 00:38:19.146 { 00:38:19.146 "name": 
null, 00:38:19.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:19.146 "is_configured": false, 00:38:19.146 "data_offset": 2048, 00:38:19.146 "data_size": 63488 00:38:19.146 }, 00:38:19.146 { 00:38:19.146 "name": "pt2", 00:38:19.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:19.146 "is_configured": true, 00:38:19.146 "data_offset": 2048, 00:38:19.146 "data_size": 63488 00:38:19.146 }, 00:38:19.146 { 00:38:19.146 "name": "pt3", 00:38:19.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:19.146 "is_configured": true, 00:38:19.147 "data_offset": 2048, 00:38:19.147 "data_size": 63488 00:38:19.147 }, 00:38:19.147 { 00:38:19.147 "name": null, 00:38:19.147 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:19.147 "is_configured": false, 00:38:19.147 "data_offset": 2048, 00:38:19.147 "data_size": 63488 00:38:19.147 } 00:38:19.147 ] 00:38:19.147 }' 00:38:19.147 12:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:19.147 12:01:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:19.713 12:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:38:19.713 12:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:19.971 12:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:38:19.971 12:01:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:38:20.227 [2024-06-10 12:01:52.091208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:20.227 [2024-06-10 12:01:52.091563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:20.227 [2024-06-10 12:01:52.091655] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:38:20.227 [2024-06-10 12:01:52.092002] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:20.227 [2024-06-10 12:01:52.092575] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:20.227 [2024-06-10 12:01:52.092745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:20.227 [2024-06-10 12:01:52.093000] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:38:20.227 [2024-06-10 12:01:52.093153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:20.227 [2024-06-10 12:01:52.093369] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:38:20.227 [2024-06-10 12:01:52.093509] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:38:20.227 [2024-06-10 12:01:52.093796] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:38:20.227 [2024-06-10 12:01:52.102817] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:38:20.227 [2024-06-10 12:01:52.103002] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:38:20.227 [2024-06-10 12:01:52.103485] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:20.227 pt4 00:38:20.227 12:01:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:20.227 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:20.484 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:20.484 "name": "raid_bdev1", 00:38:20.484 "uuid": "86043249-91f6-4416-b95d-a146f201b016", 00:38:20.484 "strip_size_kb": 64, 00:38:20.484 "state": "online", 00:38:20.484 "raid_level": "raid5f", 00:38:20.484 "superblock": true, 00:38:20.484 "num_base_bdevs": 4, 00:38:20.484 "num_base_bdevs_discovered": 3, 00:38:20.484 "num_base_bdevs_operational": 3, 00:38:20.484 "base_bdevs_list": [ 00:38:20.484 { 00:38:20.484 "name": null, 00:38:20.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:20.484 "is_configured": false, 00:38:20.484 "data_offset": 2048, 00:38:20.484 "data_size": 63488 00:38:20.484 }, 00:38:20.484 { 00:38:20.484 "name": "pt2", 00:38:20.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:20.484 "is_configured": true, 00:38:20.484 "data_offset": 2048, 00:38:20.484 "data_size": 63488 00:38:20.484 }, 00:38:20.484 { 00:38:20.484 "name": "pt3", 00:38:20.484 "uuid": "00000000-0000-0000-0000-000000000003", 00:38:20.484 "is_configured": true, 00:38:20.484 "data_offset": 2048, 00:38:20.484 "data_size": 63488 00:38:20.484 }, 00:38:20.484 { 00:38:20.484 "name": "pt4", 00:38:20.484 "uuid": "00000000-0000-0000-0000-000000000004", 00:38:20.484 "is_configured": true, 00:38:20.484 "data_offset": 2048, 00:38:20.484 "data_size": 63488 00:38:20.484 } 00:38:20.484 ] 00:38:20.484 }' 00:38:20.484 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:20.484 12:01:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:21.050 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:38:21.050 12:01:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:21.050 12:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:38:21.050 12:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 
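This is the crux of the superblock test: each time a passthru bdev is re-created with its original UUID, the raid module's examine path finds the raid5f superblock on it ("raid superblock found on bdev pt4") and re-claims it, and once three of the four members are back raid_bdev1 flips from configuring to online without any explicit bdev_raid_create. A condensed sketch of one such round, using the same names and UUID seen above; the uuid comparison traced right after this then confirms the re-assembled array is still 86043249-91f6-4416-b95d-a146f201b016:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # re-create the passthru bdev with its original UUID; examine() claims it for raid_bdev1
  $rpc bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
  # with 3 of 4 members present the array re-assembles and goes online
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'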
00:38:21.050 12:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:21.309 [2024-06-10 12:01:53.317437] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 86043249-91f6-4416-b95d-a146f201b016 '!=' 86043249-91f6-4416-b95d-a146f201b016 ']' 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 158299 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 158299 ']' 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # kill -0 158299 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # uname 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 158299 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 158299' 00:38:21.309 killing process with pid 158299 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # kill 158299 00:38:21.309 [2024-06-10 12:01:53.365795] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:21.309 12:01:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # wait 158299 00:38:21.309 [2024-06-10 12:01:53.366023] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:21.309 [2024-06-10 12:01:53.366116] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:21.309 [2024-06-10 12:01:53.366129] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:38:21.874 [2024-06-10 12:01:53.789705] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:23.260 12:01:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:38:23.260 00:38:23.260 real 0m27.673s 00:38:23.260 user 0m49.871s 00:38:23.260 sys 0m4.059s 00:38:23.260 12:01:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:23.260 12:01:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:38:23.260 ************************************ 00:38:23.260 END TEST raid5f_superblock_test 00:38:23.260 ************************************ 00:38:23.260 12:01:55 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:38:23.260 12:01:55 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:38:23.260 12:01:55 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:38:23.260 12:01:55 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:23.260 12:01:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:23.260 ************************************ 00:38:23.260 START TEST raid5f_rebuild_test 00:38:23.260 ************************************ 
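The rebuild test that starts here uses bdevperf as the RPC target, so real mixed I/O can be kept running against raid_bdev1 while a base bdev is removed and a spare is rebuilt in its place. A sketch of the launch step that follows, assuming the autotest helper waitforlisten is sourced as in the harness; the option string is the one visible in the trace below (-z keeps bdevperf idle until tests are started over RPC, -L bdev_raid enables the raid debug log component):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
          -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock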
00:38:23.260 12:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid5f 4 false false true 00:38:23.260 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:38:23.260 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:38:23.260 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:38:23.260 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=159160 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 159160 /var/tmp/spdk-raid.sock 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@830 -- # '[' -z 159160 ']' 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:23.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:23.261 12:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:23.261 [2024-06-10 12:01:55.316586] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:38:23.261 [2024-06-10 12:01:55.317006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159160 ] 00:38:23.261 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:23.261 Zero copy mechanism will not be used. 00:38:23.520 [2024-06-10 12:01:55.497573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.778 [2024-06-10 12:01:55.793036] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.037 [2024-06-10 12:01:56.035199] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:24.295 12:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:24.295 12:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@863 -- # return 0 00:38:24.295 12:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:24.295 12:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:24.553 BaseBdev1_malloc 00:38:24.553 12:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:24.811 [2024-06-10 12:01:56.772808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:24.811 [2024-06-10 12:01:56.773175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:24.811 [2024-06-10 12:01:56.773330] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:38:24.811 [2024-06-10 12:01:56.773434] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:24.811 [2024-06-10 12:01:56.776292] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:24.811 [2024-06-10 12:01:56.776460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:24.811 BaseBdev1 00:38:24.811 12:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 
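Each base device is assembled from the same pair of RPCs traced here for BaseBdev1, and the loop continues below for BaseBdev2 through BaseBdev4. A minimal sketch of the whole loop, assuming the rpc.py client and socket used throughout:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      # 32 MB malloc backing bdev with 512-byte blocks (65536 blocks each)
      $rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"
      # wrap it in a passthru bdev; raid_bdev1 is later created on top of these BaseBdevN names
      $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
  done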
00:38:24.811 12:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:25.069 BaseBdev2_malloc 00:38:25.069 12:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:25.327 [2024-06-10 12:01:57.311033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:25.327 [2024-06-10 12:01:57.311304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:25.327 [2024-06-10 12:01:57.311406] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:38:25.327 [2024-06-10 12:01:57.311649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:25.327 [2024-06-10 12:01:57.314220] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:25.327 [2024-06-10 12:01:57.314391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:25.327 BaseBdev2 00:38:25.327 12:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:25.327 12:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:25.586 BaseBdev3_malloc 00:38:25.586 12:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:25.845 [2024-06-10 12:01:57.847440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:25.845 [2024-06-10 12:01:57.847773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:25.845 [2024-06-10 12:01:57.847909] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:38:25.845 [2024-06-10 12:01:57.848036] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:25.845 [2024-06-10 12:01:57.850663] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:25.845 [2024-06-10 12:01:57.850882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:25.845 BaseBdev3 00:38:25.845 12:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:25.845 12:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:26.103 BaseBdev4_malloc 00:38:26.103 12:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:26.361 [2024-06-10 12:01:58.411040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:26.361 [2024-06-10 12:01:58.411345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:26.361 [2024-06-10 12:01:58.411518] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:26.361 [2024-06-10 12:01:58.411662] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:38:26.361 [2024-06-10 12:01:58.414314] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:26.361 [2024-06-10 12:01:58.414498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:26.361 BaseBdev4 00:38:26.619 12:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:38:26.878 spare_malloc 00:38:26.878 12:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:27.136 spare_delay 00:38:27.136 12:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:27.136 [2024-06-10 12:01:59.179012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:27.136 [2024-06-10 12:01:59.179349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:27.136 [2024-06-10 12:01:59.179437] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:27.136 [2024-06-10 12:01:59.179575] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:27.136 [2024-06-10 12:01:59.182473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:27.136 [2024-06-10 12:01:59.182670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:27.136 spare 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:38:27.394 [2024-06-10 12:01:59.387124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:27.394 [2024-06-10 12:01:59.389493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:27.394 [2024-06-10 12:01:59.389714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:27.394 [2024-06-10 12:01:59.389870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:27.394 [2024-06-10 12:01:59.390005] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:38:27.394 [2024-06-10 12:01:59.390167] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:38:27.394 [2024-06-10 12:01:59.390424] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:27.394 [2024-06-10 12:01:59.400386] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:38:27.394 [2024-06-10 12:01:59.400544] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:38:27.394 [2024-06-10 12:01:59.400882] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:27.394 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.651 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:27.651 "name": "raid_bdev1", 00:38:27.651 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:27.651 "strip_size_kb": 64, 00:38:27.651 "state": "online", 00:38:27.651 "raid_level": "raid5f", 00:38:27.651 "superblock": false, 00:38:27.651 "num_base_bdevs": 4, 00:38:27.651 "num_base_bdevs_discovered": 4, 00:38:27.651 "num_base_bdevs_operational": 4, 00:38:27.651 "base_bdevs_list": [ 00:38:27.651 { 00:38:27.651 "name": "BaseBdev1", 00:38:27.651 "uuid": "675b5a38-e3c0-5968-9232-c25bf4e78a83", 00:38:27.651 "is_configured": true, 00:38:27.651 "data_offset": 0, 00:38:27.651 "data_size": 65536 00:38:27.651 }, 00:38:27.651 { 00:38:27.651 "name": "BaseBdev2", 00:38:27.651 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:27.651 "is_configured": true, 00:38:27.651 "data_offset": 0, 00:38:27.651 "data_size": 65536 00:38:27.651 }, 00:38:27.651 { 00:38:27.651 "name": "BaseBdev3", 00:38:27.651 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:27.651 "is_configured": true, 00:38:27.651 "data_offset": 0, 00:38:27.651 "data_size": 65536 00:38:27.651 }, 00:38:27.651 { 00:38:27.651 "name": "BaseBdev4", 00:38:27.651 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:27.651 "is_configured": true, 00:38:27.651 "data_offset": 0, 00:38:27.651 "data_size": 65536 00:38:27.651 } 00:38:27.651 ] 00:38:27.651 }' 00:38:27.651 12:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:27.651 12:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:28.219 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:28.219 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:38:28.475 [2024-06-10 12:02:00.509093] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:28.475 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:38:28.732 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:28.732 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@618 -- # data_offset=0 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:28.991 12:02:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:28.991 [2024-06-10 12:02:01.037120] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:38:29.248 /dev/nbd0 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # break 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:38:29.248 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:29.248 1+0 records in 00:38:29.249 1+0 records out 00:38:29.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365363 s, 11.2 MB/s 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # return 0 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:38:29.249 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:38:29.813 512+0 records in 00:38:29.813 512+0 records out 00:38:29.813 100663296 bytes (101 MB, 96 MiB) copied, 0.650347 s, 155 MB/s 00:38:29.813 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:38:29.813 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:29.813 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:29.813 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:29.813 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:38:29.813 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:29.813 12:02:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:30.070 [2024-06-10 12:02:02.065074] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:30.070 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:30.327 [2024-06-10 12:02:02.332692] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:30.327 12:02:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:30.327 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:30.328 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:30.328 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.585 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:30.585 "name": "raid_bdev1", 00:38:30.585 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:30.585 "strip_size_kb": 64, 00:38:30.585 "state": "online", 00:38:30.585 "raid_level": "raid5f", 00:38:30.585 "superblock": false, 00:38:30.585 "num_base_bdevs": 4, 00:38:30.585 "num_base_bdevs_discovered": 3, 00:38:30.585 "num_base_bdevs_operational": 3, 00:38:30.585 "base_bdevs_list": [ 00:38:30.585 { 00:38:30.585 "name": null, 00:38:30.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:30.585 "is_configured": false, 00:38:30.585 "data_offset": 0, 00:38:30.585 "data_size": 65536 00:38:30.585 }, 00:38:30.585 { 00:38:30.585 "name": "BaseBdev2", 00:38:30.585 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:30.585 "is_configured": true, 00:38:30.585 "data_offset": 0, 00:38:30.585 "data_size": 65536 00:38:30.585 }, 00:38:30.585 { 00:38:30.585 "name": "BaseBdev3", 00:38:30.585 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:30.585 "is_configured": true, 00:38:30.585 "data_offset": 0, 00:38:30.585 "data_size": 65536 00:38:30.585 }, 00:38:30.585 { 00:38:30.585 "name": "BaseBdev4", 00:38:30.585 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:30.585 "is_configured": true, 00:38:30.585 "data_offset": 0, 00:38:30.585 "data_size": 65536 00:38:30.585 } 00:38:30.585 ] 00:38:30.585 }' 00:38:30.585 12:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:30.585 12:02:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:31.175 12:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:31.433 [2024-06-10 12:02:03.442123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:31.433 [2024-06-10 12:02:03.460838] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:38:31.433 [2024-06-10 12:02:03.472845] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:31.433 12:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:32.808 12:02:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:32.808 "name": "raid_bdev1", 00:38:32.808 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:32.808 "strip_size_kb": 64, 00:38:32.808 "state": "online", 00:38:32.808 "raid_level": "raid5f", 00:38:32.808 "superblock": false, 00:38:32.808 "num_base_bdevs": 4, 00:38:32.808 "num_base_bdevs_discovered": 4, 00:38:32.808 "num_base_bdevs_operational": 4, 00:38:32.808 "process": { 00:38:32.808 "type": "rebuild", 00:38:32.808 "target": "spare", 00:38:32.808 "progress": { 00:38:32.808 "blocks": 21120, 00:38:32.808 "percent": 10 00:38:32.808 } 00:38:32.808 }, 00:38:32.808 "base_bdevs_list": [ 00:38:32.808 { 00:38:32.808 "name": "spare", 00:38:32.808 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:32.808 "is_configured": true, 00:38:32.808 "data_offset": 0, 00:38:32.808 "data_size": 65536 00:38:32.808 }, 00:38:32.808 { 00:38:32.808 "name": "BaseBdev2", 00:38:32.808 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:32.808 "is_configured": true, 00:38:32.808 "data_offset": 0, 00:38:32.808 "data_size": 65536 00:38:32.808 }, 00:38:32.808 { 00:38:32.808 "name": "BaseBdev3", 00:38:32.808 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:32.808 "is_configured": true, 00:38:32.808 "data_offset": 0, 00:38:32.808 "data_size": 65536 00:38:32.808 }, 00:38:32.808 { 00:38:32.808 "name": "BaseBdev4", 00:38:32.808 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:32.808 "is_configured": true, 00:38:32.808 "data_offset": 0, 00:38:32.808 "data_size": 65536 00:38:32.808 } 00:38:32.808 ] 00:38:32.808 }' 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:32.808 12:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:33.066 [2024-06-10 12:02:04.954516] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:33.066 [2024-06-10 12:02:04.985895] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:33.066 [2024-06-10 12:02:04.986188] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:33.066 [2024-06-10 12:02:04.986250] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:33.066 [2024-06-10 12:02:04.986349] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:33.066 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:33.066 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:33.066 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:33.066 12:02:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:33.066 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:33.067 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:33.067 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:33.067 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:33.067 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:33.067 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:33.067 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:33.067 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.326 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:33.326 "name": "raid_bdev1", 00:38:33.326 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:33.326 "strip_size_kb": 64, 00:38:33.326 "state": "online", 00:38:33.326 "raid_level": "raid5f", 00:38:33.326 "superblock": false, 00:38:33.326 "num_base_bdevs": 4, 00:38:33.326 "num_base_bdevs_discovered": 3, 00:38:33.326 "num_base_bdevs_operational": 3, 00:38:33.326 "base_bdevs_list": [ 00:38:33.326 { 00:38:33.326 "name": null, 00:38:33.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:33.326 "is_configured": false, 00:38:33.326 "data_offset": 0, 00:38:33.326 "data_size": 65536 00:38:33.326 }, 00:38:33.326 { 00:38:33.326 "name": "BaseBdev2", 00:38:33.326 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:33.326 "is_configured": true, 00:38:33.326 "data_offset": 0, 00:38:33.326 "data_size": 65536 00:38:33.326 }, 00:38:33.326 { 00:38:33.326 "name": "BaseBdev3", 00:38:33.326 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:33.326 "is_configured": true, 00:38:33.326 "data_offset": 0, 00:38:33.326 "data_size": 65536 00:38:33.326 }, 00:38:33.326 { 00:38:33.326 "name": "BaseBdev4", 00:38:33.326 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:33.326 "is_configured": true, 00:38:33.326 "data_offset": 0, 00:38:33.326 "data_size": 65536 00:38:33.326 } 00:38:33.326 ] 00:38:33.326 }' 00:38:33.326 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:33.326 12:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:33.891 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:33.891 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:33.891 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:33.891 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:33.891 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:33.891 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:33.891 12:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:34.150 12:02:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:34.150 "name": "raid_bdev1", 00:38:34.150 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:34.150 "strip_size_kb": 64, 00:38:34.150 "state": "online", 00:38:34.150 "raid_level": "raid5f", 00:38:34.150 "superblock": false, 00:38:34.150 "num_base_bdevs": 4, 00:38:34.150 "num_base_bdevs_discovered": 3, 00:38:34.150 "num_base_bdevs_operational": 3, 00:38:34.150 "base_bdevs_list": [ 00:38:34.150 { 00:38:34.150 "name": null, 00:38:34.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:34.150 "is_configured": false, 00:38:34.150 "data_offset": 0, 00:38:34.150 "data_size": 65536 00:38:34.150 }, 00:38:34.150 { 00:38:34.150 "name": "BaseBdev2", 00:38:34.150 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:34.150 "is_configured": true, 00:38:34.150 "data_offset": 0, 00:38:34.150 "data_size": 65536 00:38:34.150 }, 00:38:34.150 { 00:38:34.150 "name": "BaseBdev3", 00:38:34.150 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:34.150 "is_configured": true, 00:38:34.150 "data_offset": 0, 00:38:34.150 "data_size": 65536 00:38:34.150 }, 00:38:34.150 { 00:38:34.150 "name": "BaseBdev4", 00:38:34.150 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:34.150 "is_configured": true, 00:38:34.150 "data_offset": 0, 00:38:34.150 "data_size": 65536 00:38:34.150 } 00:38:34.150 ] 00:38:34.150 }' 00:38:34.150 12:02:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:34.150 12:02:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:34.150 12:02:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:34.408 12:02:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:34.408 12:02:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:34.408 [2024-06-10 12:02:06.412213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:34.408 [2024-06-10 12:02:06.428927] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:38:34.408 [2024-06-10 12:02:06.439695] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:34.408 12:02:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:35.780 "name": "raid_bdev1", 00:38:35.780 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:35.780 
"strip_size_kb": 64, 00:38:35.780 "state": "online", 00:38:35.780 "raid_level": "raid5f", 00:38:35.780 "superblock": false, 00:38:35.780 "num_base_bdevs": 4, 00:38:35.780 "num_base_bdevs_discovered": 4, 00:38:35.780 "num_base_bdevs_operational": 4, 00:38:35.780 "process": { 00:38:35.780 "type": "rebuild", 00:38:35.780 "target": "spare", 00:38:35.780 "progress": { 00:38:35.780 "blocks": 23040, 00:38:35.780 "percent": 11 00:38:35.780 } 00:38:35.780 }, 00:38:35.780 "base_bdevs_list": [ 00:38:35.780 { 00:38:35.780 "name": "spare", 00:38:35.780 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:35.780 "is_configured": true, 00:38:35.780 "data_offset": 0, 00:38:35.780 "data_size": 65536 00:38:35.780 }, 00:38:35.780 { 00:38:35.780 "name": "BaseBdev2", 00:38:35.780 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:35.780 "is_configured": true, 00:38:35.780 "data_offset": 0, 00:38:35.780 "data_size": 65536 00:38:35.780 }, 00:38:35.780 { 00:38:35.780 "name": "BaseBdev3", 00:38:35.780 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:35.780 "is_configured": true, 00:38:35.780 "data_offset": 0, 00:38:35.780 "data_size": 65536 00:38:35.780 }, 00:38:35.780 { 00:38:35.780 "name": "BaseBdev4", 00:38:35.780 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:35.780 "is_configured": true, 00:38:35.780 "data_offset": 0, 00:38:35.780 "data_size": 65536 00:38:35.780 } 00:38:35.780 ] 00:38:35.780 }' 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1387 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.780 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:36.054 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:36.054 "name": "raid_bdev1", 00:38:36.054 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:36.054 "strip_size_kb": 64, 00:38:36.054 "state": "online", 00:38:36.054 "raid_level": 
"raid5f", 00:38:36.054 "superblock": false, 00:38:36.054 "num_base_bdevs": 4, 00:38:36.054 "num_base_bdevs_discovered": 4, 00:38:36.054 "num_base_bdevs_operational": 4, 00:38:36.054 "process": { 00:38:36.054 "type": "rebuild", 00:38:36.054 "target": "spare", 00:38:36.054 "progress": { 00:38:36.054 "blocks": 26880, 00:38:36.054 "percent": 13 00:38:36.054 } 00:38:36.054 }, 00:38:36.054 "base_bdevs_list": [ 00:38:36.054 { 00:38:36.054 "name": "spare", 00:38:36.054 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:36.054 "is_configured": true, 00:38:36.054 "data_offset": 0, 00:38:36.054 "data_size": 65536 00:38:36.054 }, 00:38:36.054 { 00:38:36.054 "name": "BaseBdev2", 00:38:36.054 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:36.054 "is_configured": true, 00:38:36.054 "data_offset": 0, 00:38:36.054 "data_size": 65536 00:38:36.054 }, 00:38:36.054 { 00:38:36.054 "name": "BaseBdev3", 00:38:36.054 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:36.054 "is_configured": true, 00:38:36.054 "data_offset": 0, 00:38:36.054 "data_size": 65536 00:38:36.054 }, 00:38:36.054 { 00:38:36.054 "name": "BaseBdev4", 00:38:36.054 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:36.054 "is_configured": true, 00:38:36.054 "data_offset": 0, 00:38:36.055 "data_size": 65536 00:38:36.055 } 00:38:36.055 ] 00:38:36.055 }' 00:38:36.055 12:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:36.055 12:02:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:36.055 12:02:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:36.055 12:02:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:36.055 12:02:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:37.430 "name": "raid_bdev1", 00:38:37.430 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:37.430 "strip_size_kb": 64, 00:38:37.430 "state": "online", 00:38:37.430 "raid_level": "raid5f", 00:38:37.430 "superblock": false, 00:38:37.430 "num_base_bdevs": 4, 00:38:37.430 "num_base_bdevs_discovered": 4, 00:38:37.430 "num_base_bdevs_operational": 4, 00:38:37.430 "process": { 00:38:37.430 "type": "rebuild", 00:38:37.430 "target": "spare", 00:38:37.430 "progress": { 00:38:37.430 "blocks": 53760, 00:38:37.430 "percent": 27 00:38:37.430 } 00:38:37.430 }, 00:38:37.430 "base_bdevs_list": [ 00:38:37.430 
{ 00:38:37.430 "name": "spare", 00:38:37.430 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:37.430 "is_configured": true, 00:38:37.430 "data_offset": 0, 00:38:37.430 "data_size": 65536 00:38:37.430 }, 00:38:37.430 { 00:38:37.430 "name": "BaseBdev2", 00:38:37.430 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:37.430 "is_configured": true, 00:38:37.430 "data_offset": 0, 00:38:37.430 "data_size": 65536 00:38:37.430 }, 00:38:37.430 { 00:38:37.430 "name": "BaseBdev3", 00:38:37.430 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:37.430 "is_configured": true, 00:38:37.430 "data_offset": 0, 00:38:37.430 "data_size": 65536 00:38:37.430 }, 00:38:37.430 { 00:38:37.430 "name": "BaseBdev4", 00:38:37.430 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:37.430 "is_configured": true, 00:38:37.430 "data_offset": 0, 00:38:37.430 "data_size": 65536 00:38:37.430 } 00:38:37.430 ] 00:38:37.430 }' 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:37.430 12:02:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:38.365 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:38.365 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:38.365 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:38.365 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:38.365 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:38.365 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:38.365 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:38.365 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:38.623 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:38.623 "name": "raid_bdev1", 00:38:38.623 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:38.623 "strip_size_kb": 64, 00:38:38.623 "state": "online", 00:38:38.623 "raid_level": "raid5f", 00:38:38.623 "superblock": false, 00:38:38.623 "num_base_bdevs": 4, 00:38:38.623 "num_base_bdevs_discovered": 4, 00:38:38.623 "num_base_bdevs_operational": 4, 00:38:38.623 "process": { 00:38:38.624 "type": "rebuild", 00:38:38.624 "target": "spare", 00:38:38.624 "progress": { 00:38:38.624 "blocks": 78720, 00:38:38.624 "percent": 40 00:38:38.624 } 00:38:38.624 }, 00:38:38.624 "base_bdevs_list": [ 00:38:38.624 { 00:38:38.624 "name": "spare", 00:38:38.624 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:38.624 "is_configured": true, 00:38:38.624 "data_offset": 0, 00:38:38.624 "data_size": 65536 00:38:38.624 }, 00:38:38.624 { 00:38:38.624 "name": "BaseBdev2", 00:38:38.624 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:38.624 "is_configured": true, 00:38:38.624 "data_offset": 0, 00:38:38.624 "data_size": 65536 
00:38:38.624 }, 00:38:38.624 { 00:38:38.624 "name": "BaseBdev3", 00:38:38.624 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:38.624 "is_configured": true, 00:38:38.624 "data_offset": 0, 00:38:38.624 "data_size": 65536 00:38:38.624 }, 00:38:38.624 { 00:38:38.624 "name": "BaseBdev4", 00:38:38.624 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:38.624 "is_configured": true, 00:38:38.624 "data_offset": 0, 00:38:38.624 "data_size": 65536 00:38:38.624 } 00:38:38.624 ] 00:38:38.624 }' 00:38:38.624 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:38.624 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:38.624 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:38.883 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:38.883 12:02:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:39.819 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:39.819 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:39.819 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:39.819 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:39.819 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:39.819 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:39.819 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:39.819 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.078 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:40.078 "name": "raid_bdev1", 00:38:40.078 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:40.078 "strip_size_kb": 64, 00:38:40.078 "state": "online", 00:38:40.078 "raid_level": "raid5f", 00:38:40.078 "superblock": false, 00:38:40.078 "num_base_bdevs": 4, 00:38:40.078 "num_base_bdevs_discovered": 4, 00:38:40.078 "num_base_bdevs_operational": 4, 00:38:40.078 "process": { 00:38:40.078 "type": "rebuild", 00:38:40.078 "target": "spare", 00:38:40.078 "progress": { 00:38:40.078 "blocks": 101760, 00:38:40.078 "percent": 51 00:38:40.078 } 00:38:40.078 }, 00:38:40.078 "base_bdevs_list": [ 00:38:40.078 { 00:38:40.078 "name": "spare", 00:38:40.078 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:40.078 "is_configured": true, 00:38:40.078 "data_offset": 0, 00:38:40.078 "data_size": 65536 00:38:40.078 }, 00:38:40.078 { 00:38:40.078 "name": "BaseBdev2", 00:38:40.078 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:40.078 "is_configured": true, 00:38:40.078 "data_offset": 0, 00:38:40.078 "data_size": 65536 00:38:40.078 }, 00:38:40.078 { 00:38:40.078 "name": "BaseBdev3", 00:38:40.078 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:40.078 "is_configured": true, 00:38:40.078 "data_offset": 0, 00:38:40.078 "data_size": 65536 00:38:40.078 }, 00:38:40.078 { 00:38:40.078 "name": "BaseBdev4", 00:38:40.078 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:40.078 "is_configured": true, 00:38:40.078 "data_offset": 0, 
00:38:40.078 "data_size": 65536 00:38:40.078 } 00:38:40.078 ] 00:38:40.078 }' 00:38:40.078 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:40.078 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:40.078 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:40.078 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:40.078 12:02:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:41.013 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:41.013 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:41.013 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:41.013 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:41.013 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:41.013 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:41.013 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:41.013 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:41.272 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:41.272 "name": "raid_bdev1", 00:38:41.272 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:41.272 "strip_size_kb": 64, 00:38:41.272 "state": "online", 00:38:41.272 "raid_level": "raid5f", 00:38:41.272 "superblock": false, 00:38:41.272 "num_base_bdevs": 4, 00:38:41.272 "num_base_bdevs_discovered": 4, 00:38:41.272 "num_base_bdevs_operational": 4, 00:38:41.272 "process": { 00:38:41.272 "type": "rebuild", 00:38:41.272 "target": "spare", 00:38:41.272 "progress": { 00:38:41.272 "blocks": 128640, 00:38:41.272 "percent": 65 00:38:41.272 } 00:38:41.272 }, 00:38:41.272 "base_bdevs_list": [ 00:38:41.272 { 00:38:41.272 "name": "spare", 00:38:41.272 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:41.272 "is_configured": true, 00:38:41.272 "data_offset": 0, 00:38:41.272 "data_size": 65536 00:38:41.272 }, 00:38:41.272 { 00:38:41.272 "name": "BaseBdev2", 00:38:41.272 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:41.272 "is_configured": true, 00:38:41.272 "data_offset": 0, 00:38:41.272 "data_size": 65536 00:38:41.272 }, 00:38:41.272 { 00:38:41.272 "name": "BaseBdev3", 00:38:41.272 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:41.272 "is_configured": true, 00:38:41.272 "data_offset": 0, 00:38:41.272 "data_size": 65536 00:38:41.272 }, 00:38:41.272 { 00:38:41.272 "name": "BaseBdev4", 00:38:41.272 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:41.272 "is_configured": true, 00:38:41.272 "data_offset": 0, 00:38:41.272 "data_size": 65536 00:38:41.272 } 00:38:41.272 ] 00:38:41.272 }' 00:38:41.272 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:41.531 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:41.531 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:38:41.531 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:41.531 12:02:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:42.526 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:42.526 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:42.526 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:42.526 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:42.526 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:42.526 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:42.526 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.526 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:42.785 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:42.785 "name": "raid_bdev1", 00:38:42.785 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:42.785 "strip_size_kb": 64, 00:38:42.785 "state": "online", 00:38:42.785 "raid_level": "raid5f", 00:38:42.785 "superblock": false, 00:38:42.785 "num_base_bdevs": 4, 00:38:42.785 "num_base_bdevs_discovered": 4, 00:38:42.785 "num_base_bdevs_operational": 4, 00:38:42.785 "process": { 00:38:42.785 "type": "rebuild", 00:38:42.785 "target": "spare", 00:38:42.785 "progress": { 00:38:42.785 "blocks": 155520, 00:38:42.785 "percent": 79 00:38:42.785 } 00:38:42.785 }, 00:38:42.785 "base_bdevs_list": [ 00:38:42.785 { 00:38:42.785 "name": "spare", 00:38:42.785 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:42.785 "is_configured": true, 00:38:42.785 "data_offset": 0, 00:38:42.785 "data_size": 65536 00:38:42.785 }, 00:38:42.785 { 00:38:42.785 "name": "BaseBdev2", 00:38:42.785 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:42.785 "is_configured": true, 00:38:42.785 "data_offset": 0, 00:38:42.785 "data_size": 65536 00:38:42.785 }, 00:38:42.785 { 00:38:42.785 "name": "BaseBdev3", 00:38:42.785 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:42.785 "is_configured": true, 00:38:42.785 "data_offset": 0, 00:38:42.785 "data_size": 65536 00:38:42.785 }, 00:38:42.785 { 00:38:42.785 "name": "BaseBdev4", 00:38:42.785 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:42.785 "is_configured": true, 00:38:42.785 "data_offset": 0, 00:38:42.785 "data_size": 65536 00:38:42.785 } 00:38:42.785 ] 00:38:42.785 }' 00:38:42.785 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:42.785 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:42.785 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:42.785 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:42.785 12:02:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:43.723 12:02:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:43.723 12:02:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:38:43.723 12:02:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:43.723 12:02:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:43.723 12:02:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:43.723 12:02:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:43.723 12:02:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:43.723 12:02:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.981 12:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:43.981 "name": "raid_bdev1", 00:38:43.981 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:43.981 "strip_size_kb": 64, 00:38:43.981 "state": "online", 00:38:43.981 "raid_level": "raid5f", 00:38:43.981 "superblock": false, 00:38:43.981 "num_base_bdevs": 4, 00:38:43.981 "num_base_bdevs_discovered": 4, 00:38:43.981 "num_base_bdevs_operational": 4, 00:38:43.981 "process": { 00:38:43.981 "type": "rebuild", 00:38:43.981 "target": "spare", 00:38:43.981 "progress": { 00:38:43.981 "blocks": 180480, 00:38:43.981 "percent": 91 00:38:43.981 } 00:38:43.981 }, 00:38:43.981 "base_bdevs_list": [ 00:38:43.981 { 00:38:43.981 "name": "spare", 00:38:43.981 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:43.981 "is_configured": true, 00:38:43.981 "data_offset": 0, 00:38:43.981 "data_size": 65536 00:38:43.981 }, 00:38:43.981 { 00:38:43.981 "name": "BaseBdev2", 00:38:43.981 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:43.981 "is_configured": true, 00:38:43.981 "data_offset": 0, 00:38:43.981 "data_size": 65536 00:38:43.981 }, 00:38:43.981 { 00:38:43.981 "name": "BaseBdev3", 00:38:43.981 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:43.981 "is_configured": true, 00:38:43.981 "data_offset": 0, 00:38:43.981 "data_size": 65536 00:38:43.981 }, 00:38:43.981 { 00:38:43.981 "name": "BaseBdev4", 00:38:43.981 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:43.981 "is_configured": true, 00:38:43.981 "data_offset": 0, 00:38:43.981 "data_size": 65536 00:38:43.981 } 00:38:43.981 ] 00:38:43.981 }' 00:38:43.981 12:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:44.240 12:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:44.240 12:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:44.240 12:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:44.240 12:02:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:44.806 [2024-06-10 12:02:16.825757] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:44.806 [2024-06-10 12:02:16.826061] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:44.806 [2024-06-10 12:02:16.826230] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:45.117 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:45.117 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:45.117 
12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:45.117 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:45.117 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:45.117 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:45.117 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:45.117 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.378 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:45.378 "name": "raid_bdev1", 00:38:45.378 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:45.378 "strip_size_kb": 64, 00:38:45.378 "state": "online", 00:38:45.378 "raid_level": "raid5f", 00:38:45.378 "superblock": false, 00:38:45.378 "num_base_bdevs": 4, 00:38:45.378 "num_base_bdevs_discovered": 4, 00:38:45.378 "num_base_bdevs_operational": 4, 00:38:45.378 "base_bdevs_list": [ 00:38:45.378 { 00:38:45.378 "name": "spare", 00:38:45.378 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:45.378 "is_configured": true, 00:38:45.378 "data_offset": 0, 00:38:45.378 "data_size": 65536 00:38:45.378 }, 00:38:45.378 { 00:38:45.378 "name": "BaseBdev2", 00:38:45.378 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:45.378 "is_configured": true, 00:38:45.378 "data_offset": 0, 00:38:45.378 "data_size": 65536 00:38:45.378 }, 00:38:45.378 { 00:38:45.378 "name": "BaseBdev3", 00:38:45.378 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:45.378 "is_configured": true, 00:38:45.378 "data_offset": 0, 00:38:45.378 "data_size": 65536 00:38:45.378 }, 00:38:45.378 { 00:38:45.378 "name": "BaseBdev4", 00:38:45.378 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:45.378 "is_configured": true, 00:38:45.378 "data_offset": 0, 00:38:45.378 "data_size": 65536 00:38:45.378 } 00:38:45.378 ] 00:38:45.378 }' 00:38:45.378 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:45.378 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:45.378 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:45.637 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.896 12:02:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:45.896 "name": "raid_bdev1", 00:38:45.896 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:45.896 "strip_size_kb": 64, 00:38:45.896 "state": "online", 00:38:45.896 "raid_level": "raid5f", 00:38:45.896 "superblock": false, 00:38:45.896 "num_base_bdevs": 4, 00:38:45.896 "num_base_bdevs_discovered": 4, 00:38:45.896 "num_base_bdevs_operational": 4, 00:38:45.896 "base_bdevs_list": [ 00:38:45.896 { 00:38:45.896 "name": "spare", 00:38:45.896 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:45.896 "is_configured": true, 00:38:45.896 "data_offset": 0, 00:38:45.896 "data_size": 65536 00:38:45.896 }, 00:38:45.896 { 00:38:45.896 "name": "BaseBdev2", 00:38:45.896 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:45.896 "is_configured": true, 00:38:45.896 "data_offset": 0, 00:38:45.896 "data_size": 65536 00:38:45.896 }, 00:38:45.896 { 00:38:45.896 "name": "BaseBdev3", 00:38:45.896 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:45.896 "is_configured": true, 00:38:45.896 "data_offset": 0, 00:38:45.896 "data_size": 65536 00:38:45.896 }, 00:38:45.896 { 00:38:45.896 "name": "BaseBdev4", 00:38:45.896 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:45.896 "is_configured": true, 00:38:45.896 "data_offset": 0, 00:38:45.896 "data_size": 65536 00:38:45.896 } 00:38:45.896 ] 00:38:45.896 }' 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:45.896 12:02:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:46.155 12:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:46.155 "name": "raid_bdev1", 00:38:46.155 "uuid": "6f391b22-fad8-417d-86fd-27a6270f54d0", 00:38:46.155 "strip_size_kb": 64, 00:38:46.155 "state": "online", 00:38:46.155 "raid_level": "raid5f", 00:38:46.155 
"superblock": false, 00:38:46.155 "num_base_bdevs": 4, 00:38:46.155 "num_base_bdevs_discovered": 4, 00:38:46.155 "num_base_bdevs_operational": 4, 00:38:46.155 "base_bdevs_list": [ 00:38:46.155 { 00:38:46.155 "name": "spare", 00:38:46.155 "uuid": "b1dc5eee-caaf-59f4-8cfe-78631533cd64", 00:38:46.155 "is_configured": true, 00:38:46.155 "data_offset": 0, 00:38:46.155 "data_size": 65536 00:38:46.155 }, 00:38:46.155 { 00:38:46.155 "name": "BaseBdev2", 00:38:46.155 "uuid": "bdd4bfa7-6948-5014-8b68-cf9b72768661", 00:38:46.155 "is_configured": true, 00:38:46.155 "data_offset": 0, 00:38:46.155 "data_size": 65536 00:38:46.155 }, 00:38:46.155 { 00:38:46.155 "name": "BaseBdev3", 00:38:46.155 "uuid": "49aec4bf-33ce-5c17-9a99-d3d4f6514ef3", 00:38:46.155 "is_configured": true, 00:38:46.155 "data_offset": 0, 00:38:46.155 "data_size": 65536 00:38:46.155 }, 00:38:46.155 { 00:38:46.155 "name": "BaseBdev4", 00:38:46.155 "uuid": "42424b82-4034-5dd7-9dde-ac28228d5c6a", 00:38:46.155 "is_configured": true, 00:38:46.155 "data_offset": 0, 00:38:46.155 "data_size": 65536 00:38:46.155 } 00:38:46.155 ] 00:38:46.155 }' 00:38:46.155 12:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:46.155 12:02:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:46.720 12:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:46.976 [2024-06-10 12:02:18.895950] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:46.976 [2024-06-10 12:02:18.896237] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:46.976 [2024-06-10 12:02:18.896437] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:46.977 [2024-06-10 12:02:18.896632] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:46.977 [2024-06-10 12:02:18.896733] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:38:46.977 12:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:38:46.977 12:02:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:47.233 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:38:47.233 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:38:47.233 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:38:47.233 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:47.234 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:47.234 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:38:47.234 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:47.234 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:47.234 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:47.234 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:38:47.234 12:02:19 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:47.234 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:47.234 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:47.490 /dev/nbd0 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # break 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:47.490 1+0 records in 00:38:47.490 1+0 records out 00:38:47.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370404 s, 11.1 MB/s 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:47.490 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:38:47.747 /dev/nbd1 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # 
break 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:47.748 1+0 records in 00:38:47.748 1+0 records out 00:38:47.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533827 s, 7.7 MB/s 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:47.748 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:38:48.005 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:38:48.005 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:48.005 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:48.005 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:48.005 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:38:48.005 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:48.005 12:02:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:48.262 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd1 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 159160 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@949 -- # '[' -z 159160 ']' 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # kill -0 159160 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # uname 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 159160 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 159160' 00:38:48.520 killing process with pid 159160 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # kill 159160 00:38:48.520 Received shutdown signal, test time was about 60.000000 seconds 00:38:48.520 00:38:48.520 Latency(us) 00:38:48.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.520 =================================================================================================================== 00:38:48.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:48.520 12:02:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # wait 159160 00:38:48.520 [2024-06-10 12:02:20.577573] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:49.452 [2024-06-10 12:02:21.147570] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:38:50.848 00:38:50.848 real 0m27.433s 00:38:50.848 user 0m39.314s 00:38:50.848 sys 0m3.557s 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.848 ************************************ 00:38:50.848 END TEST raid5f_rebuild_test 00:38:50.848 ************************************ 00:38:50.848 12:02:22 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:38:50.848 12:02:22 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:38:50.848 12:02:22 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:50.848 12:02:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:50.848 ************************************ 00:38:50.848 START TEST 
raid5f_rebuild_test_sb 00:38:50.848 ************************************ 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid5f 4 true false true 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:38:50.848 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=159787 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 159787 /var/tmp/spdk-raid.sock 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@830 -- # '[' -z 159787 ']' 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:50.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:50.849 12:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:50.849 [2024-06-10 12:02:22.824389] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:38:50.849 [2024-06-10 12:02:22.824740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159787 ] 00:38:50.849 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:50.849 Zero copy mechanism will not be used. 
00:38:51.107 [2024-06-10 12:02:22.988362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.366 [2024-06-10 12:02:23.218886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.624 [2024-06-10 12:02:23.467572] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:51.882 12:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:51.882 12:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@863 -- # return 0 00:38:51.882 12:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:51.882 12:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:52.140 BaseBdev1_malloc 00:38:52.140 12:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:52.398 [2024-06-10 12:02:24.204507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:52.398 [2024-06-10 12:02:24.204852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:52.398 [2024-06-10 12:02:24.204938] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:38:52.398 [2024-06-10 12:02:24.205204] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:52.399 [2024-06-10 12:02:24.207825] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:52.399 [2024-06-10 12:02:24.207985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:52.399 BaseBdev1 00:38:52.399 12:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:52.399 12:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:52.656 BaseBdev2_malloc 00:38:52.657 12:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:52.915 [2024-06-10 12:02:24.772632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:52.915 [2024-06-10 12:02:24.772922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:52.915 [2024-06-10 12:02:24.773026] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:38:52.915 [2024-06-10 12:02:24.773270] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:52.915 [2024-06-10 12:02:24.775746] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:52.915 [2024-06-10 12:02:24.775907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:52.915 BaseBdev2 00:38:52.915 12:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:52.915 12:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:53.172 BaseBdev3_malloc 00:38:53.172 12:02:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:53.430 [2024-06-10 12:02:25.318865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:53.430 [2024-06-10 12:02:25.319173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:53.430 [2024-06-10 12:02:25.319252] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:38:53.430 [2024-06-10 12:02:25.319373] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:53.430 [2024-06-10 12:02:25.321956] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:53.430 [2024-06-10 12:02:25.322132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:53.430 BaseBdev3 00:38:53.430 12:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:53.430 12:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:53.688 BaseBdev4_malloc 00:38:53.688 12:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:53.945 [2024-06-10 12:02:25.788312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:53.945 [2024-06-10 12:02:25.788651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:53.945 [2024-06-10 12:02:25.788726] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:53.946 [2024-06-10 12:02:25.788834] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:53.946 [2024-06-10 12:02:25.791288] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:53.946 [2024-06-10 12:02:25.791453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:53.946 BaseBdev4 00:38:53.946 12:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:38:54.263 spare_malloc 00:38:54.263 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:54.263 spare_delay 00:38:54.520 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:54.520 [2024-06-10 12:02:26.509551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:54.520 [2024-06-10 12:02:26.509893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:54.520 [2024-06-10 12:02:26.509971] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:54.520 [2024-06-10 12:02:26.510253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:54.520 [2024-06-10 12:02:26.512901] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:38:54.520 [2024-06-10 12:02:26.513091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:54.520 spare 00:38:54.520 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:38:54.778 [2024-06-10 12:02:26.793685] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:54.778 [2024-06-10 12:02:26.796123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:54.778 [2024-06-10 12:02:26.796350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:54.778 [2024-06-10 12:02:26.796436] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:54.778 [2024-06-10 12:02:26.796807] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:38:54.778 [2024-06-10 12:02:26.796873] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:38:54.778 [2024-06-10 12:02:26.797132] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:54.778 [2024-06-10 12:02:26.806127] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:38:54.778 [2024-06-10 12:02:26.806251] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:38:54.778 [2024-06-10 12:02:26.806599] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:54.778 12:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.036 12:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:55.036 "name": "raid_bdev1", 00:38:55.036 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:38:55.036 "strip_size_kb": 64, 00:38:55.036 "state": "online", 00:38:55.036 "raid_level": "raid5f", 00:38:55.036 "superblock": true, 00:38:55.036 "num_base_bdevs": 4, 00:38:55.036 "num_base_bdevs_discovered": 4, 00:38:55.036 
"num_base_bdevs_operational": 4, 00:38:55.036 "base_bdevs_list": [ 00:38:55.036 { 00:38:55.036 "name": "BaseBdev1", 00:38:55.036 "uuid": "fcd3a2c8-f0f8-558e-80ec-e211fe4b5672", 00:38:55.036 "is_configured": true, 00:38:55.036 "data_offset": 2048, 00:38:55.036 "data_size": 63488 00:38:55.036 }, 00:38:55.036 { 00:38:55.036 "name": "BaseBdev2", 00:38:55.036 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:38:55.036 "is_configured": true, 00:38:55.036 "data_offset": 2048, 00:38:55.036 "data_size": 63488 00:38:55.036 }, 00:38:55.036 { 00:38:55.036 "name": "BaseBdev3", 00:38:55.036 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:38:55.036 "is_configured": true, 00:38:55.036 "data_offset": 2048, 00:38:55.036 "data_size": 63488 00:38:55.036 }, 00:38:55.036 { 00:38:55.036 "name": "BaseBdev4", 00:38:55.036 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:38:55.036 "is_configured": true, 00:38:55.036 "data_offset": 2048, 00:38:55.036 "data_size": 63488 00:38:55.036 } 00:38:55.036 ] 00:38:55.036 }' 00:38:55.036 12:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:55.036 12:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:55.603 12:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:55.603 12:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:38:55.862 [2024-06-10 12:02:27.885177] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:55.862 12:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:38:55.862 12:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.862 12:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:56.121 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:56.380 [2024-06-10 12:02:28.313199] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:38:56.380 /dev/nbd0 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:56.380 1+0 records in 00:38:56.380 1+0 records out 00:38:56.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565967 s, 7.2 MB/s 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:38:56.380 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:38:56.947 496+0 records in 00:38:56.947 496+0 records out 00:38:56.947 97517568 bytes (98 MB, 93 MiB) copied, 0.604788 s, 161 MB/s 00:38:56.947 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:38:56.947 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:56.947 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:56.947 12:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:56.947 12:02:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:38:56.947 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:56.947 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:57.515 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:57.515 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:57.515 [2024-06-10 12:02:29.284657] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:57.515 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:57.515 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:57.515 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:57.515 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:57.515 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:57.515 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:57.516 [2024-06-10 12:02:29.478098] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:57.516 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.774 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:57.774 "name": "raid_bdev1", 00:38:57.774 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:38:57.774 "strip_size_kb": 64, 00:38:57.774 "state": "online", 00:38:57.774 "raid_level": "raid5f", 00:38:57.774 "superblock": true, 00:38:57.774 "num_base_bdevs": 4, 00:38:57.774 "num_base_bdevs_discovered": 3, 00:38:57.774 "num_base_bdevs_operational": 3, 00:38:57.774 "base_bdevs_list": [ 00:38:57.774 { 00:38:57.774 "name": 
null, 00:38:57.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:57.774 "is_configured": false, 00:38:57.774 "data_offset": 2048, 00:38:57.774 "data_size": 63488 00:38:57.774 }, 00:38:57.774 { 00:38:57.774 "name": "BaseBdev2", 00:38:57.774 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:38:57.774 "is_configured": true, 00:38:57.774 "data_offset": 2048, 00:38:57.775 "data_size": 63488 00:38:57.775 }, 00:38:57.775 { 00:38:57.775 "name": "BaseBdev3", 00:38:57.775 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:38:57.775 "is_configured": true, 00:38:57.775 "data_offset": 2048, 00:38:57.775 "data_size": 63488 00:38:57.775 }, 00:38:57.775 { 00:38:57.775 "name": "BaseBdev4", 00:38:57.775 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:38:57.775 "is_configured": true, 00:38:57.775 "data_offset": 2048, 00:38:57.775 "data_size": 63488 00:38:57.775 } 00:38:57.775 ] 00:38:57.775 }' 00:38:57.775 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:57.775 12:02:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:58.343 12:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:58.602 [2024-06-10 12:02:30.570459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:58.602 [2024-06-10 12:02:30.589249] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:38:58.602 [2024-06-10 12:02:30.601337] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:58.602 12:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:38:59.630 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:59.630 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:59.630 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:59.630 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:59.630 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:59.630 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:59.630 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:59.910 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:59.910 "name": "raid_bdev1", 00:38:59.910 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:38:59.910 "strip_size_kb": 64, 00:38:59.910 "state": "online", 00:38:59.910 "raid_level": "raid5f", 00:38:59.910 "superblock": true, 00:38:59.910 "num_base_bdevs": 4, 00:38:59.910 "num_base_bdevs_discovered": 4, 00:38:59.910 "num_base_bdevs_operational": 4, 00:38:59.910 "process": { 00:38:59.910 "type": "rebuild", 00:38:59.910 "target": "spare", 00:38:59.910 "progress": { 00:38:59.910 "blocks": 23040, 00:38:59.910 "percent": 12 00:38:59.910 } 00:38:59.910 }, 00:38:59.910 "base_bdevs_list": [ 00:38:59.910 { 00:38:59.910 "name": "spare", 00:38:59.910 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:38:59.910 "is_configured": true, 00:38:59.910 "data_offset": 2048, 00:38:59.910 
"data_size": 63488 00:38:59.910 }, 00:38:59.910 { 00:38:59.910 "name": "BaseBdev2", 00:38:59.910 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:38:59.910 "is_configured": true, 00:38:59.910 "data_offset": 2048, 00:38:59.910 "data_size": 63488 00:38:59.910 }, 00:38:59.910 { 00:38:59.910 "name": "BaseBdev3", 00:38:59.910 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:38:59.910 "is_configured": true, 00:38:59.910 "data_offset": 2048, 00:38:59.910 "data_size": 63488 00:38:59.910 }, 00:38:59.910 { 00:38:59.910 "name": "BaseBdev4", 00:38:59.910 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:38:59.910 "is_configured": true, 00:38:59.910 "data_offset": 2048, 00:38:59.910 "data_size": 63488 00:38:59.910 } 00:38:59.910 ] 00:38:59.910 }' 00:38:59.910 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:59.910 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:59.910 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:00.169 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:00.169 12:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:00.169 [2024-06-10 12:02:32.219581] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:00.428 [2024-06-10 12:02:32.318197] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:00.428 [2024-06-10 12:02:32.318593] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:00.428 [2024-06-10 12:02:32.318790] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:00.428 [2024-06-10 12:02:32.318845] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:00.428 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:00.686 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:39:00.686 "name": "raid_bdev1", 00:39:00.686 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:00.686 "strip_size_kb": 64, 00:39:00.686 "state": "online", 00:39:00.686 "raid_level": "raid5f", 00:39:00.686 "superblock": true, 00:39:00.686 "num_base_bdevs": 4, 00:39:00.686 "num_base_bdevs_discovered": 3, 00:39:00.686 "num_base_bdevs_operational": 3, 00:39:00.686 "base_bdevs_list": [ 00:39:00.686 { 00:39:00.686 "name": null, 00:39:00.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:00.686 "is_configured": false, 00:39:00.686 "data_offset": 2048, 00:39:00.686 "data_size": 63488 00:39:00.686 }, 00:39:00.686 { 00:39:00.686 "name": "BaseBdev2", 00:39:00.686 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:00.686 "is_configured": true, 00:39:00.686 "data_offset": 2048, 00:39:00.686 "data_size": 63488 00:39:00.686 }, 00:39:00.686 { 00:39:00.686 "name": "BaseBdev3", 00:39:00.686 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:00.686 "is_configured": true, 00:39:00.686 "data_offset": 2048, 00:39:00.686 "data_size": 63488 00:39:00.686 }, 00:39:00.686 { 00:39:00.686 "name": "BaseBdev4", 00:39:00.686 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:00.686 "is_configured": true, 00:39:00.686 "data_offset": 2048, 00:39:00.686 "data_size": 63488 00:39:00.686 } 00:39:00.686 ] 00:39:00.686 }' 00:39:00.686 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:00.686 12:02:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:01.256 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:01.256 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:01.256 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:01.256 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:01.256 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:01.256 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:01.256 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.518 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:01.518 "name": "raid_bdev1", 00:39:01.518 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:01.518 "strip_size_kb": 64, 00:39:01.518 "state": "online", 00:39:01.518 "raid_level": "raid5f", 00:39:01.518 "superblock": true, 00:39:01.518 "num_base_bdevs": 4, 00:39:01.518 "num_base_bdevs_discovered": 3, 00:39:01.518 "num_base_bdevs_operational": 3, 00:39:01.518 "base_bdevs_list": [ 00:39:01.518 { 00:39:01.518 "name": null, 00:39:01.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.518 "is_configured": false, 00:39:01.518 "data_offset": 2048, 00:39:01.518 "data_size": 63488 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": "BaseBdev2", 00:39:01.518 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 2048, 00:39:01.518 "data_size": 63488 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": "BaseBdev3", 00:39:01.518 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 2048, 
00:39:01.518 "data_size": 63488 00:39:01.518 }, 00:39:01.518 { 00:39:01.518 "name": "BaseBdev4", 00:39:01.518 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:01.518 "is_configured": true, 00:39:01.518 "data_offset": 2048, 00:39:01.518 "data_size": 63488 00:39:01.518 } 00:39:01.518 ] 00:39:01.518 }' 00:39:01.518 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:01.518 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:01.518 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:01.518 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:01.518 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:01.777 [2024-06-10 12:02:33.772597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:01.777 [2024-06-10 12:02:33.791111] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:39:01.777 [2024-06-10 12:02:33.803003] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:01.777 12:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:03.154 12:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:03.154 12:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:03.154 12:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:03.154 12:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:03.154 12:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:03.154 12:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:03.155 12:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:03.155 "name": "raid_bdev1", 00:39:03.155 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:03.155 "strip_size_kb": 64, 00:39:03.155 "state": "online", 00:39:03.155 "raid_level": "raid5f", 00:39:03.155 "superblock": true, 00:39:03.155 "num_base_bdevs": 4, 00:39:03.155 "num_base_bdevs_discovered": 4, 00:39:03.155 "num_base_bdevs_operational": 4, 00:39:03.155 "process": { 00:39:03.155 "type": "rebuild", 00:39:03.155 "target": "spare", 00:39:03.155 "progress": { 00:39:03.155 "blocks": 21120, 00:39:03.155 "percent": 11 00:39:03.155 } 00:39:03.155 }, 00:39:03.155 "base_bdevs_list": [ 00:39:03.155 { 00:39:03.155 "name": "spare", 00:39:03.155 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:03.155 "is_configured": true, 00:39:03.155 "data_offset": 2048, 00:39:03.155 "data_size": 63488 00:39:03.155 }, 00:39:03.155 { 00:39:03.155 "name": "BaseBdev2", 00:39:03.155 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:03.155 "is_configured": true, 00:39:03.155 "data_offset": 2048, 00:39:03.155 "data_size": 63488 00:39:03.155 }, 00:39:03.155 { 00:39:03.155 "name": "BaseBdev3", 00:39:03.155 "uuid": 
"fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:03.155 "is_configured": true, 00:39:03.155 "data_offset": 2048, 00:39:03.155 "data_size": 63488 00:39:03.155 }, 00:39:03.155 { 00:39:03.155 "name": "BaseBdev4", 00:39:03.155 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:03.155 "is_configured": true, 00:39:03.155 "data_offset": 2048, 00:39:03.155 "data_size": 63488 00:39:03.155 } 00:39:03.155 ] 00:39:03.155 }' 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:39:03.155 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1415 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:03.155 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:03.414 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:03.414 "name": "raid_bdev1", 00:39:03.414 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:03.414 "strip_size_kb": 64, 00:39:03.414 "state": "online", 00:39:03.414 "raid_level": "raid5f", 00:39:03.414 "superblock": true, 00:39:03.414 "num_base_bdevs": 4, 00:39:03.414 "num_base_bdevs_discovered": 4, 00:39:03.414 "num_base_bdevs_operational": 4, 00:39:03.414 "process": { 00:39:03.414 "type": "rebuild", 00:39:03.414 "target": "spare", 00:39:03.414 "progress": { 00:39:03.414 "blocks": 28800, 00:39:03.414 "percent": 15 00:39:03.414 } 00:39:03.414 }, 00:39:03.414 "base_bdevs_list": [ 00:39:03.414 { 00:39:03.414 "name": "spare", 00:39:03.414 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:03.414 "is_configured": true, 00:39:03.414 "data_offset": 2048, 00:39:03.414 "data_size": 63488 00:39:03.414 }, 00:39:03.414 { 00:39:03.414 "name": "BaseBdev2", 00:39:03.414 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 
00:39:03.414 "is_configured": true, 00:39:03.414 "data_offset": 2048, 00:39:03.414 "data_size": 63488 00:39:03.414 }, 00:39:03.414 { 00:39:03.414 "name": "BaseBdev3", 00:39:03.414 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:03.414 "is_configured": true, 00:39:03.414 "data_offset": 2048, 00:39:03.414 "data_size": 63488 00:39:03.414 }, 00:39:03.414 { 00:39:03.414 "name": "BaseBdev4", 00:39:03.414 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:03.414 "is_configured": true, 00:39:03.414 "data_offset": 2048, 00:39:03.414 "data_size": 63488 00:39:03.414 } 00:39:03.414 ] 00:39:03.414 }' 00:39:03.414 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:03.414 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:03.414 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:03.414 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:03.414 12:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:04.791 "name": "raid_bdev1", 00:39:04.791 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:04.791 "strip_size_kb": 64, 00:39:04.791 "state": "online", 00:39:04.791 "raid_level": "raid5f", 00:39:04.791 "superblock": true, 00:39:04.791 "num_base_bdevs": 4, 00:39:04.791 "num_base_bdevs_discovered": 4, 00:39:04.791 "num_base_bdevs_operational": 4, 00:39:04.791 "process": { 00:39:04.791 "type": "rebuild", 00:39:04.791 "target": "spare", 00:39:04.791 "progress": { 00:39:04.791 "blocks": 55680, 00:39:04.791 "percent": 29 00:39:04.791 } 00:39:04.791 }, 00:39:04.791 "base_bdevs_list": [ 00:39:04.791 { 00:39:04.791 "name": "spare", 00:39:04.791 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:04.791 "is_configured": true, 00:39:04.791 "data_offset": 2048, 00:39:04.791 "data_size": 63488 00:39:04.791 }, 00:39:04.791 { 00:39:04.791 "name": "BaseBdev2", 00:39:04.791 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:04.791 "is_configured": true, 00:39:04.791 "data_offset": 2048, 00:39:04.791 "data_size": 63488 00:39:04.791 }, 00:39:04.791 { 00:39:04.791 "name": "BaseBdev3", 00:39:04.791 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:04.791 "is_configured": true, 00:39:04.791 "data_offset": 2048, 00:39:04.791 "data_size": 63488 00:39:04.791 }, 00:39:04.791 { 
00:39:04.791 "name": "BaseBdev4", 00:39:04.791 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:04.791 "is_configured": true, 00:39:04.791 "data_offset": 2048, 00:39:04.791 "data_size": 63488 00:39:04.791 } 00:39:04.791 ] 00:39:04.791 }' 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:04.791 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:05.050 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:05.050 12:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:05.988 12:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:05.988 12:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:05.988 12:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:05.988 12:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:05.988 12:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:05.988 12:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:05.988 12:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:05.988 12:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:06.247 12:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:06.247 "name": "raid_bdev1", 00:39:06.247 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:06.247 "strip_size_kb": 64, 00:39:06.247 "state": "online", 00:39:06.247 "raid_level": "raid5f", 00:39:06.247 "superblock": true, 00:39:06.247 "num_base_bdevs": 4, 00:39:06.247 "num_base_bdevs_discovered": 4, 00:39:06.247 "num_base_bdevs_operational": 4, 00:39:06.247 "process": { 00:39:06.247 "type": "rebuild", 00:39:06.247 "target": "spare", 00:39:06.247 "progress": { 00:39:06.247 "blocks": 80640, 00:39:06.247 "percent": 42 00:39:06.247 } 00:39:06.247 }, 00:39:06.247 "base_bdevs_list": [ 00:39:06.247 { 00:39:06.247 "name": "spare", 00:39:06.247 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:06.247 "is_configured": true, 00:39:06.247 "data_offset": 2048, 00:39:06.247 "data_size": 63488 00:39:06.247 }, 00:39:06.247 { 00:39:06.247 "name": "BaseBdev2", 00:39:06.247 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:06.247 "is_configured": true, 00:39:06.247 "data_offset": 2048, 00:39:06.247 "data_size": 63488 00:39:06.247 }, 00:39:06.247 { 00:39:06.247 "name": "BaseBdev3", 00:39:06.247 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:06.247 "is_configured": true, 00:39:06.247 "data_offset": 2048, 00:39:06.247 "data_size": 63488 00:39:06.247 }, 00:39:06.247 { 00:39:06.247 "name": "BaseBdev4", 00:39:06.247 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:06.247 "is_configured": true, 00:39:06.247 "data_offset": 2048, 00:39:06.247 "data_size": 63488 00:39:06.247 } 00:39:06.247 ] 00:39:06.247 }' 00:39:06.247 12:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:39:06.247 12:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:06.247 12:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:06.247 12:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:06.247 12:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:07.624 "name": "raid_bdev1", 00:39:07.624 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:07.624 "strip_size_kb": 64, 00:39:07.624 "state": "online", 00:39:07.624 "raid_level": "raid5f", 00:39:07.624 "superblock": true, 00:39:07.624 "num_base_bdevs": 4, 00:39:07.624 "num_base_bdevs_discovered": 4, 00:39:07.624 "num_base_bdevs_operational": 4, 00:39:07.624 "process": { 00:39:07.624 "type": "rebuild", 00:39:07.624 "target": "spare", 00:39:07.624 "progress": { 00:39:07.624 "blocks": 107520, 00:39:07.624 "percent": 56 00:39:07.624 } 00:39:07.624 }, 00:39:07.624 "base_bdevs_list": [ 00:39:07.624 { 00:39:07.624 "name": "spare", 00:39:07.624 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:07.624 "is_configured": true, 00:39:07.624 "data_offset": 2048, 00:39:07.624 "data_size": 63488 00:39:07.624 }, 00:39:07.624 { 00:39:07.624 "name": "BaseBdev2", 00:39:07.624 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:07.624 "is_configured": true, 00:39:07.624 "data_offset": 2048, 00:39:07.624 "data_size": 63488 00:39:07.624 }, 00:39:07.624 { 00:39:07.624 "name": "BaseBdev3", 00:39:07.624 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:07.624 "is_configured": true, 00:39:07.624 "data_offset": 2048, 00:39:07.624 "data_size": 63488 00:39:07.624 }, 00:39:07.624 { 00:39:07.624 "name": "BaseBdev4", 00:39:07.624 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:07.624 "is_configured": true, 00:39:07.624 "data_offset": 2048, 00:39:07.624 "data_size": 63488 00:39:07.624 } 00:39:07.624 ] 00:39:07.624 }' 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:07.624 12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:07.624 
12:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:08.559 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:08.559 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:08.559 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:08.559 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:08.559 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:08.559 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:08.559 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:08.559 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:08.818 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:08.818 "name": "raid_bdev1", 00:39:08.818 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:08.818 "strip_size_kb": 64, 00:39:08.818 "state": "online", 00:39:08.818 "raid_level": "raid5f", 00:39:08.818 "superblock": true, 00:39:08.818 "num_base_bdevs": 4, 00:39:08.818 "num_base_bdevs_discovered": 4, 00:39:08.818 "num_base_bdevs_operational": 4, 00:39:08.818 "process": { 00:39:08.818 "type": "rebuild", 00:39:08.818 "target": "spare", 00:39:08.818 "progress": { 00:39:08.818 "blocks": 130560, 00:39:08.818 "percent": 68 00:39:08.818 } 00:39:08.818 }, 00:39:08.818 "base_bdevs_list": [ 00:39:08.818 { 00:39:08.818 "name": "spare", 00:39:08.818 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:08.818 "is_configured": true, 00:39:08.818 "data_offset": 2048, 00:39:08.818 "data_size": 63488 00:39:08.818 }, 00:39:08.818 { 00:39:08.818 "name": "BaseBdev2", 00:39:08.818 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:08.818 "is_configured": true, 00:39:08.818 "data_offset": 2048, 00:39:08.818 "data_size": 63488 00:39:08.818 }, 00:39:08.818 { 00:39:08.818 "name": "BaseBdev3", 00:39:08.818 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:08.818 "is_configured": true, 00:39:08.818 "data_offset": 2048, 00:39:08.818 "data_size": 63488 00:39:08.818 }, 00:39:08.818 { 00:39:08.818 "name": "BaseBdev4", 00:39:08.818 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:08.818 "is_configured": true, 00:39:08.818 "data_offset": 2048, 00:39:08.818 "data_size": 63488 00:39:08.818 } 00:39:08.818 ] 00:39:08.818 }' 00:39:08.818 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:08.818 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:08.818 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:09.076 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:09.076 12:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:10.052 12:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:10.052 12:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:10.052 12:02:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:10.052 12:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:10.052 12:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:10.052 12:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:10.052 12:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:10.052 12:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:10.332 12:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:10.332 "name": "raid_bdev1", 00:39:10.332 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:10.332 "strip_size_kb": 64, 00:39:10.332 "state": "online", 00:39:10.332 "raid_level": "raid5f", 00:39:10.332 "superblock": true, 00:39:10.332 "num_base_bdevs": 4, 00:39:10.332 "num_base_bdevs_discovered": 4, 00:39:10.332 "num_base_bdevs_operational": 4, 00:39:10.332 "process": { 00:39:10.332 "type": "rebuild", 00:39:10.332 "target": "spare", 00:39:10.332 "progress": { 00:39:10.332 "blocks": 157440, 00:39:10.332 "percent": 82 00:39:10.332 } 00:39:10.332 }, 00:39:10.332 "base_bdevs_list": [ 00:39:10.332 { 00:39:10.332 "name": "spare", 00:39:10.332 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:10.332 "is_configured": true, 00:39:10.332 "data_offset": 2048, 00:39:10.332 "data_size": 63488 00:39:10.332 }, 00:39:10.332 { 00:39:10.332 "name": "BaseBdev2", 00:39:10.332 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:10.332 "is_configured": true, 00:39:10.332 "data_offset": 2048, 00:39:10.332 "data_size": 63488 00:39:10.332 }, 00:39:10.332 { 00:39:10.332 "name": "BaseBdev3", 00:39:10.332 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:10.332 "is_configured": true, 00:39:10.332 "data_offset": 2048, 00:39:10.332 "data_size": 63488 00:39:10.332 }, 00:39:10.332 { 00:39:10.332 "name": "BaseBdev4", 00:39:10.332 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:10.332 "is_configured": true, 00:39:10.332 "data_offset": 2048, 00:39:10.332 "data_size": 63488 00:39:10.332 } 00:39:10.332 ] 00:39:10.332 }' 00:39:10.332 12:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:10.332 12:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:10.332 12:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:10.332 12:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:10.332 12:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:11.268 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:11.268 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:11.268 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:11.268 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:11.268 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:11.268 12:02:43 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:11.268 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:11.268 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.527 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:11.527 "name": "raid_bdev1", 00:39:11.527 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:11.527 "strip_size_kb": 64, 00:39:11.527 "state": "online", 00:39:11.527 "raid_level": "raid5f", 00:39:11.527 "superblock": true, 00:39:11.527 "num_base_bdevs": 4, 00:39:11.527 "num_base_bdevs_discovered": 4, 00:39:11.527 "num_base_bdevs_operational": 4, 00:39:11.527 "process": { 00:39:11.527 "type": "rebuild", 00:39:11.527 "target": "spare", 00:39:11.527 "progress": { 00:39:11.527 "blocks": 182400, 00:39:11.527 "percent": 95 00:39:11.527 } 00:39:11.527 }, 00:39:11.527 "base_bdevs_list": [ 00:39:11.527 { 00:39:11.527 "name": "spare", 00:39:11.527 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:11.527 "is_configured": true, 00:39:11.527 "data_offset": 2048, 00:39:11.527 "data_size": 63488 00:39:11.527 }, 00:39:11.527 { 00:39:11.527 "name": "BaseBdev2", 00:39:11.527 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:11.527 "is_configured": true, 00:39:11.527 "data_offset": 2048, 00:39:11.527 "data_size": 63488 00:39:11.527 }, 00:39:11.527 { 00:39:11.527 "name": "BaseBdev3", 00:39:11.527 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:11.527 "is_configured": true, 00:39:11.527 "data_offset": 2048, 00:39:11.527 "data_size": 63488 00:39:11.527 }, 00:39:11.527 { 00:39:11.527 "name": "BaseBdev4", 00:39:11.527 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:11.527 "is_configured": true, 00:39:11.527 "data_offset": 2048, 00:39:11.527 "data_size": 63488 00:39:11.527 } 00:39:11.527 ] 00:39:11.527 }' 00:39:11.527 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:11.527 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:11.527 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:11.527 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:11.527 12:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:12.096 [2024-06-10 12:02:43.889097] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:12.096 [2024-06-10 12:02:43.889362] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:12.096 [2024-06-10 12:02:43.889628] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:12.664 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:12.664 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:12.664 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:12.664 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:12.664 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:12.664 12:02:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:12.664 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:12.664 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:12.923 "name": "raid_bdev1", 00:39:12.923 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:12.923 "strip_size_kb": 64, 00:39:12.923 "state": "online", 00:39:12.923 "raid_level": "raid5f", 00:39:12.923 "superblock": true, 00:39:12.923 "num_base_bdevs": 4, 00:39:12.923 "num_base_bdevs_discovered": 4, 00:39:12.923 "num_base_bdevs_operational": 4, 00:39:12.923 "base_bdevs_list": [ 00:39:12.923 { 00:39:12.923 "name": "spare", 00:39:12.923 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:12.923 "is_configured": true, 00:39:12.923 "data_offset": 2048, 00:39:12.923 "data_size": 63488 00:39:12.923 }, 00:39:12.923 { 00:39:12.923 "name": "BaseBdev2", 00:39:12.923 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:12.923 "is_configured": true, 00:39:12.923 "data_offset": 2048, 00:39:12.923 "data_size": 63488 00:39:12.923 }, 00:39:12.923 { 00:39:12.923 "name": "BaseBdev3", 00:39:12.923 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:12.923 "is_configured": true, 00:39:12.923 "data_offset": 2048, 00:39:12.923 "data_size": 63488 00:39:12.923 }, 00:39:12.923 { 00:39:12.923 "name": "BaseBdev4", 00:39:12.923 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:12.923 "is_configured": true, 00:39:12.923 "data_offset": 2048, 00:39:12.923 "data_size": 63488 00:39:12.923 } 00:39:12.923 ] 00:39:12.923 }' 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:12.923 12:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:13.182 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:13.182 "name": "raid_bdev1", 00:39:13.182 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:13.182 "strip_size_kb": 64, 00:39:13.182 "state": "online", 00:39:13.182 "raid_level": "raid5f", 
00:39:13.182 "superblock": true, 00:39:13.182 "num_base_bdevs": 4, 00:39:13.182 "num_base_bdevs_discovered": 4, 00:39:13.182 "num_base_bdevs_operational": 4, 00:39:13.182 "base_bdevs_list": [ 00:39:13.182 { 00:39:13.182 "name": "spare", 00:39:13.182 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:13.182 "is_configured": true, 00:39:13.182 "data_offset": 2048, 00:39:13.182 "data_size": 63488 00:39:13.182 }, 00:39:13.182 { 00:39:13.182 "name": "BaseBdev2", 00:39:13.182 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:13.182 "is_configured": true, 00:39:13.182 "data_offset": 2048, 00:39:13.182 "data_size": 63488 00:39:13.182 }, 00:39:13.182 { 00:39:13.182 "name": "BaseBdev3", 00:39:13.182 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:13.182 "is_configured": true, 00:39:13.182 "data_offset": 2048, 00:39:13.182 "data_size": 63488 00:39:13.182 }, 00:39:13.182 { 00:39:13.182 "name": "BaseBdev4", 00:39:13.182 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:13.182 "is_configured": true, 00:39:13.182 "data_offset": 2048, 00:39:13.182 "data_size": 63488 00:39:13.182 } 00:39:13.182 ] 00:39:13.182 }' 00:39:13.182 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:13.182 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:13.182 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:13.441 "name": "raid_bdev1", 00:39:13.441 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:13.441 "strip_size_kb": 64, 00:39:13.441 "state": "online", 00:39:13.441 "raid_level": "raid5f", 00:39:13.441 "superblock": true, 00:39:13.441 "num_base_bdevs": 4, 00:39:13.441 "num_base_bdevs_discovered": 4, 00:39:13.441 "num_base_bdevs_operational": 4, 00:39:13.441 "base_bdevs_list": [ 00:39:13.441 { 00:39:13.441 "name": 
"spare", 00:39:13.441 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:13.441 "is_configured": true, 00:39:13.441 "data_offset": 2048, 00:39:13.441 "data_size": 63488 00:39:13.441 }, 00:39:13.441 { 00:39:13.441 "name": "BaseBdev2", 00:39:13.441 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:13.441 "is_configured": true, 00:39:13.441 "data_offset": 2048, 00:39:13.441 "data_size": 63488 00:39:13.441 }, 00:39:13.441 { 00:39:13.441 "name": "BaseBdev3", 00:39:13.441 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:13.441 "is_configured": true, 00:39:13.441 "data_offset": 2048, 00:39:13.441 "data_size": 63488 00:39:13.441 }, 00:39:13.441 { 00:39:13.441 "name": "BaseBdev4", 00:39:13.441 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:13.441 "is_configured": true, 00:39:13.441 "data_offset": 2048, 00:39:13.441 "data_size": 63488 00:39:13.441 } 00:39:13.441 ] 00:39:13.441 }' 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:13.441 12:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:14.009 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:14.267 [2024-06-10 12:02:46.273718] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:14.267 [2024-06-10 12:02:46.273957] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:14.267 [2024-06-10 12:02:46.274160] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:14.267 [2024-06-10 12:02:46.274367] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:14.267 [2024-06-10 12:02:46.274474] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:39:14.267 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:14.267 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:39:14.836 /dev/nbd0 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:14.836 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:39:15.094 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:39:15.094 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:39:15.094 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:39:15.094 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:15.094 1+0 records in 00:39:15.094 1+0 records out 00:39:15.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355777 s, 11.5 MB/s 00:39:15.095 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:15.095 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:39:15.095 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:15.095 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:39:15.095 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:39:15.095 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:15.095 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:15.095 12:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:39:15.095 /dev/nbd1 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@883 -- # (( i = 1 )) 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:39:15.095 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:15.095 1+0 records in 00:39:15.095 1+0 records out 00:39:15.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661722 s, 6.2 MB/s 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:15.354 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:15.613 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:39:15.872 12:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:16.130 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:16.389 [2024-06-10 12:02:48.327313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:16.389 [2024-06-10 12:02:48.327622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:16.389 [2024-06-10 12:02:48.327800] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:39:16.389 [2024-06-10 12:02:48.327920] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:16.389 [2024-06-10 12:02:48.330797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:16.389 [2024-06-10 12:02:48.331000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:16.389 [2024-06-10 12:02:48.331242] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:16.389 [2024-06-10 12:02:48.331388] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:16.389 [2024-06-10 12:02:48.331650] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:16.389 [2024-06-10 12:02:48.331936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:16.389 [2024-06-10 12:02:48.332139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:16.389 spare 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
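
Condensed, the verification-and-reassemble phase above is a handful of RPC calls: expose BaseBdev1 and the rebuilt spare as NBD devices, compare them past the metadata area, tear the NBD devices down, then delete and re-create the delay-passthru spare so the examine path finds its raid superblock and claims it back into raid_bdev1. A rough sketch under those assumptions (the 1048576-byte cmp offset comes from the trace and lines up with the 2048-block data_offset at 512-byte blocks):

    #!/usr/bin/env bash
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Compare the surviving base bdev against the rebuilt spare, skipping the
    # first 1 MiB of raid metadata; cmp exits non-zero on the first mismatch.
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare /dev/nbd1
    cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo "spare matches BaseBdev1"
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1

    # Re-create the delay passthru on top of spare_delay; on creation the raid
    # superblock is examined and the bdev is claimed back into raid_bdev1.
    $rpc bdev_passthru_delete spare
    $rpc bdev_passthru_create -b spare_delay -p spare
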
00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:16.389 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:16.389 [2024-06-10 12:02:48.432334] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:39:16.389 [2024-06-10 12:02:48.432597] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:39:16.389 [2024-06-10 12:02:48.432791] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049440 00:39:16.389 [2024-06-10 12:02:48.441809] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:39:16.389 [2024-06-10 12:02:48.441945] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:39:16.389 [2024-06-10 12:02:48.442207] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:16.648 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:16.648 "name": "raid_bdev1", 00:39:16.648 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:16.648 "strip_size_kb": 64, 00:39:16.648 "state": "online", 00:39:16.648 "raid_level": "raid5f", 00:39:16.648 "superblock": true, 00:39:16.648 "num_base_bdevs": 4, 00:39:16.648 "num_base_bdevs_discovered": 4, 00:39:16.648 "num_base_bdevs_operational": 4, 00:39:16.648 "base_bdevs_list": [ 00:39:16.648 { 00:39:16.648 "name": "spare", 00:39:16.648 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:16.648 "is_configured": true, 00:39:16.648 "data_offset": 2048, 00:39:16.648 "data_size": 63488 00:39:16.648 }, 00:39:16.648 { 00:39:16.648 "name": "BaseBdev2", 00:39:16.648 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:16.648 "is_configured": true, 00:39:16.648 "data_offset": 2048, 00:39:16.648 "data_size": 63488 00:39:16.648 }, 00:39:16.648 { 00:39:16.648 "name": "BaseBdev3", 00:39:16.648 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:16.648 "is_configured": true, 00:39:16.648 "data_offset": 2048, 00:39:16.648 "data_size": 63488 00:39:16.648 }, 00:39:16.648 { 00:39:16.648 "name": "BaseBdev4", 00:39:16.648 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:16.648 "is_configured": true, 00:39:16.648 "data_offset": 2048, 00:39:16.648 "data_size": 63488 00:39:16.648 } 00:39:16.648 ] 00:39:16.648 }' 00:39:16.648 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:16.648 12:02:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:17.216 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:17.216 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:17.216 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:17.216 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:17.216 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:17.216 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.216 12:02:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:17.784 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:17.784 "name": "raid_bdev1", 00:39:17.784 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:17.784 "strip_size_kb": 64, 00:39:17.784 "state": "online", 00:39:17.784 "raid_level": "raid5f", 00:39:17.784 "superblock": true, 00:39:17.784 "num_base_bdevs": 4, 00:39:17.784 "num_base_bdevs_discovered": 4, 00:39:17.784 "num_base_bdevs_operational": 4, 00:39:17.784 "base_bdevs_list": [ 00:39:17.784 { 00:39:17.784 "name": "spare", 00:39:17.784 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:17.784 "is_configured": true, 00:39:17.784 "data_offset": 2048, 00:39:17.784 "data_size": 63488 00:39:17.784 }, 00:39:17.784 { 00:39:17.784 "name": "BaseBdev2", 00:39:17.784 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:17.784 "is_configured": true, 00:39:17.784 "data_offset": 2048, 00:39:17.784 "data_size": 63488 00:39:17.784 }, 00:39:17.784 { 00:39:17.785 "name": "BaseBdev3", 00:39:17.785 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:17.785 "is_configured": true, 00:39:17.785 "data_offset": 2048, 00:39:17.785 "data_size": 63488 00:39:17.785 }, 00:39:17.785 { 00:39:17.785 "name": "BaseBdev4", 00:39:17.785 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:17.785 "is_configured": true, 00:39:17.785 "data_offset": 2048, 00:39:17.785 "data_size": 63488 00:39:17.785 } 00:39:17.785 ] 00:39:17.785 }' 00:39:17.785 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:17.785 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:17.785 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:17.785 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:17.785 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:17.785 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:18.129 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:39:18.129 12:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:18.129 [2024-06-10 12:02:50.118238] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:18.129 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.402 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:18.402 "name": "raid_bdev1", 00:39:18.402 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:18.402 "strip_size_kb": 64, 00:39:18.402 "state": "online", 00:39:18.402 "raid_level": "raid5f", 00:39:18.402 "superblock": true, 00:39:18.402 "num_base_bdevs": 4, 00:39:18.402 "num_base_bdevs_discovered": 3, 00:39:18.402 "num_base_bdevs_operational": 3, 00:39:18.402 "base_bdevs_list": [ 00:39:18.402 { 00:39:18.402 "name": null, 00:39:18.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.402 "is_configured": false, 00:39:18.402 "data_offset": 2048, 00:39:18.402 "data_size": 63488 00:39:18.402 }, 00:39:18.402 { 00:39:18.402 "name": "BaseBdev2", 00:39:18.402 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:18.402 "is_configured": true, 00:39:18.402 "data_offset": 2048, 00:39:18.402 "data_size": 63488 00:39:18.402 }, 00:39:18.402 { 00:39:18.402 "name": "BaseBdev3", 00:39:18.402 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:18.402 "is_configured": true, 00:39:18.402 "data_offset": 2048, 00:39:18.402 "data_size": 63488 00:39:18.402 }, 00:39:18.402 { 00:39:18.402 "name": "BaseBdev4", 00:39:18.402 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:18.402 "is_configured": true, 00:39:18.402 "data_offset": 2048, 00:39:18.402 "data_size": 63488 00:39:18.402 } 00:39:18.402 ] 00:39:18.402 }' 00:39:18.402 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:18.402 12:02:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:19.340 12:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:19.340 [2024-06-10 12:02:51.266548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:19.341 [2024-06-10 12:02:51.266991] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:19.341 [2024-06-10 12:02:51.267127] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
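
The remove/re-add cycle the log just stepped through reduces to three RPC calls; a hedged sketch (bdev names, commands and JSON fields are from the trace, the degraded-state check is only illustrative):

    #!/usr/bin/env bash
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Drop the spare; the array stays online but runs degraded (3 of 4 base bdevs).
    $rpc bdev_raid_remove_base_bdev spare
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")
        | "\(.state): \(.num_base_bdevs_discovered)/\(.num_base_bdevs) base bdevs"'

    # Re-add it; its superblock sequence number is older than the raid bdev's,
    # so a rebuild with spare as the target is started automatically.
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare
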
00:39:19.341 [2024-06-10 12:02:51.267327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:19.341 [2024-06-10 12:02:51.287143] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:39:19.341 12:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:39:19.341 [2024-06-10 12:02:51.314433] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:20.277 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:20.277 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:20.277 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:20.277 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:20.277 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:20.277 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:20.277 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:20.534 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:20.534 "name": "raid_bdev1", 00:39:20.534 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:20.534 "strip_size_kb": 64, 00:39:20.534 "state": "online", 00:39:20.534 "raid_level": "raid5f", 00:39:20.534 "superblock": true, 00:39:20.534 "num_base_bdevs": 4, 00:39:20.534 "num_base_bdevs_discovered": 4, 00:39:20.534 "num_base_bdevs_operational": 4, 00:39:20.534 "process": { 00:39:20.534 "type": "rebuild", 00:39:20.534 "target": "spare", 00:39:20.534 "progress": { 00:39:20.534 "blocks": 21120, 00:39:20.534 "percent": 11 00:39:20.534 } 00:39:20.534 }, 00:39:20.534 "base_bdevs_list": [ 00:39:20.534 { 00:39:20.534 "name": "spare", 00:39:20.534 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:20.534 "is_configured": true, 00:39:20.534 "data_offset": 2048, 00:39:20.534 "data_size": 63488 00:39:20.534 }, 00:39:20.534 { 00:39:20.535 "name": "BaseBdev2", 00:39:20.535 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:20.535 "is_configured": true, 00:39:20.535 "data_offset": 2048, 00:39:20.535 "data_size": 63488 00:39:20.535 }, 00:39:20.535 { 00:39:20.535 "name": "BaseBdev3", 00:39:20.535 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:20.535 "is_configured": true, 00:39:20.535 "data_offset": 2048, 00:39:20.535 "data_size": 63488 00:39:20.535 }, 00:39:20.535 { 00:39:20.535 "name": "BaseBdev4", 00:39:20.535 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:20.535 "is_configured": true, 00:39:20.535 "data_offset": 2048, 00:39:20.535 "data_size": 63488 00:39:20.535 } 00:39:20.535 ] 00:39:20.535 }' 00:39:20.535 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:20.792 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:20.792 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:20.792 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:20.792 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:20.792 [2024-06-10 12:02:52.847234] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:21.051 [2024-06-10 12:02:52.942876] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:21.051 [2024-06-10 12:02:52.943208] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:21.051 [2024-06-10 12:02:52.943340] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:21.051 [2024-06-10 12:02:52.943384] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:21.051 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:21.051 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:21.051 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:21.051 12:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:39:21.051 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:39:21.051 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:39:21.051 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:21.051 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:21.051 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:21.051 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:21.051 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:21.051 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:21.309 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:21.309 "name": "raid_bdev1", 00:39:21.309 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:21.309 "strip_size_kb": 64, 00:39:21.309 "state": "online", 00:39:21.309 "raid_level": "raid5f", 00:39:21.309 "superblock": true, 00:39:21.309 "num_base_bdevs": 4, 00:39:21.309 "num_base_bdevs_discovered": 3, 00:39:21.309 "num_base_bdevs_operational": 3, 00:39:21.309 "base_bdevs_list": [ 00:39:21.309 { 00:39:21.309 "name": null, 00:39:21.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.309 "is_configured": false, 00:39:21.309 "data_offset": 2048, 00:39:21.309 "data_size": 63488 00:39:21.309 }, 00:39:21.309 { 00:39:21.309 "name": "BaseBdev2", 00:39:21.309 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:21.309 "is_configured": true, 00:39:21.309 "data_offset": 2048, 00:39:21.309 "data_size": 63488 00:39:21.309 }, 00:39:21.309 { 00:39:21.309 "name": "BaseBdev3", 00:39:21.309 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:21.309 "is_configured": true, 00:39:21.309 "data_offset": 2048, 00:39:21.309 "data_size": 63488 00:39:21.309 }, 00:39:21.309 { 00:39:21.309 "name": "BaseBdev4", 00:39:21.309 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:21.309 "is_configured": true, 00:39:21.309 "data_offset": 2048, 00:39:21.309 "data_size": 63488 
00:39:21.309 } 00:39:21.309 ] 00:39:21.309 }' 00:39:21.309 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:21.309 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:21.885 12:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:22.143 [2024-06-10 12:02:54.074434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:22.143 [2024-06-10 12:02:54.074863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:22.143 [2024-06-10 12:02:54.074963] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:39:22.143 [2024-06-10 12:02:54.075114] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:22.143 [2024-06-10 12:02:54.075849] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:22.143 [2024-06-10 12:02:54.076030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:22.143 [2024-06-10 12:02:54.076365] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:22.143 [2024-06-10 12:02:54.076468] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:22.143 [2024-06-10 12:02:54.076553] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:39:22.143 [2024-06-10 12:02:54.076694] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:22.143 [2024-06-10 12:02:54.095269] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049920 00:39:22.143 spare 00:39:22.143 [2024-06-10 12:02:54.106095] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:22.143 12:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:39:23.077 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:23.077 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:23.077 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:23.077 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:23.077 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:23.077 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:23.077 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:23.643 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:23.643 "name": "raid_bdev1", 00:39:23.643 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:23.643 "strip_size_kb": 64, 00:39:23.643 "state": "online", 00:39:23.643 "raid_level": "raid5f", 00:39:23.643 "superblock": true, 00:39:23.643 "num_base_bdevs": 4, 00:39:23.643 "num_base_bdevs_discovered": 4, 00:39:23.643 "num_base_bdevs_operational": 4, 00:39:23.643 "process": { 00:39:23.643 "type": "rebuild", 00:39:23.643 "target": "spare", 
00:39:23.643 "progress": { 00:39:23.643 "blocks": 24960, 00:39:23.643 "percent": 13 00:39:23.643 } 00:39:23.643 }, 00:39:23.643 "base_bdevs_list": [ 00:39:23.643 { 00:39:23.643 "name": "spare", 00:39:23.643 "uuid": "65d7f8ff-81fd-5842-9908-deafde05066c", 00:39:23.643 "is_configured": true, 00:39:23.643 "data_offset": 2048, 00:39:23.643 "data_size": 63488 00:39:23.643 }, 00:39:23.643 { 00:39:23.643 "name": "BaseBdev2", 00:39:23.643 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:23.643 "is_configured": true, 00:39:23.643 "data_offset": 2048, 00:39:23.643 "data_size": 63488 00:39:23.643 }, 00:39:23.643 { 00:39:23.643 "name": "BaseBdev3", 00:39:23.643 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:23.643 "is_configured": true, 00:39:23.643 "data_offset": 2048, 00:39:23.643 "data_size": 63488 00:39:23.643 }, 00:39:23.643 { 00:39:23.643 "name": "BaseBdev4", 00:39:23.643 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:23.643 "is_configured": true, 00:39:23.643 "data_offset": 2048, 00:39:23.643 "data_size": 63488 00:39:23.643 } 00:39:23.643 ] 00:39:23.643 }' 00:39:23.643 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:23.643 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:23.643 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:23.643 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:23.643 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:23.901 [2024-06-10 12:02:55.908892] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:23.901 [2024-06-10 12:02:55.924632] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:23.901 [2024-06-10 12:02:55.924913] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:23.901 [2024-06-10 12:02:55.924972] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:23.901 [2024-06-10 12:02:55.925054] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:24.158 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:24.159 12:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.159 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:24.159 "name": "raid_bdev1", 00:39:24.159 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:24.159 "strip_size_kb": 64, 00:39:24.159 "state": "online", 00:39:24.159 "raid_level": "raid5f", 00:39:24.159 "superblock": true, 00:39:24.159 "num_base_bdevs": 4, 00:39:24.159 "num_base_bdevs_discovered": 3, 00:39:24.159 "num_base_bdevs_operational": 3, 00:39:24.159 "base_bdevs_list": [ 00:39:24.159 { 00:39:24.159 "name": null, 00:39:24.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.159 "is_configured": false, 00:39:24.159 "data_offset": 2048, 00:39:24.159 "data_size": 63488 00:39:24.159 }, 00:39:24.159 { 00:39:24.159 "name": "BaseBdev2", 00:39:24.159 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:24.159 "is_configured": true, 00:39:24.159 "data_offset": 2048, 00:39:24.159 "data_size": 63488 00:39:24.159 }, 00:39:24.159 { 00:39:24.159 "name": "BaseBdev3", 00:39:24.159 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:24.159 "is_configured": true, 00:39:24.159 "data_offset": 2048, 00:39:24.159 "data_size": 63488 00:39:24.159 }, 00:39:24.159 { 00:39:24.159 "name": "BaseBdev4", 00:39:24.159 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:24.159 "is_configured": true, 00:39:24.159 "data_offset": 2048, 00:39:24.159 "data_size": 63488 00:39:24.159 } 00:39:24.159 ] 00:39:24.159 }' 00:39:24.159 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:24.159 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:25.094 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:25.094 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:25.094 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:25.094 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:25.094 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:25.094 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:25.094 12:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:25.094 12:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:25.094 "name": "raid_bdev1", 00:39:25.094 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:25.094 "strip_size_kb": 64, 00:39:25.094 "state": "online", 00:39:25.094 "raid_level": "raid5f", 00:39:25.094 "superblock": true, 00:39:25.094 "num_base_bdevs": 4, 00:39:25.094 "num_base_bdevs_discovered": 3, 00:39:25.094 "num_base_bdevs_operational": 3, 00:39:25.094 "base_bdevs_list": [ 00:39:25.094 { 00:39:25.094 "name": null, 00:39:25.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.094 "is_configured": false, 00:39:25.094 "data_offset": 2048, 00:39:25.094 "data_size": 63488 00:39:25.094 }, 00:39:25.094 { 00:39:25.094 "name": "BaseBdev2", 00:39:25.094 "uuid": 
"f644230b-a04f-5af3-98a5-9177e975658e", 00:39:25.094 "is_configured": true, 00:39:25.094 "data_offset": 2048, 00:39:25.094 "data_size": 63488 00:39:25.094 }, 00:39:25.094 { 00:39:25.094 "name": "BaseBdev3", 00:39:25.094 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:25.094 "is_configured": true, 00:39:25.094 "data_offset": 2048, 00:39:25.094 "data_size": 63488 00:39:25.094 }, 00:39:25.094 { 00:39:25.094 "name": "BaseBdev4", 00:39:25.094 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:25.094 "is_configured": true, 00:39:25.094 "data_offset": 2048, 00:39:25.094 "data_size": 63488 00:39:25.094 } 00:39:25.094 ] 00:39:25.094 }' 00:39:25.094 12:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:25.352 12:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:25.352 12:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:25.352 12:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:25.352 12:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:39:25.612 12:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:25.612 [2024-06-10 12:02:57.622132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:25.612 [2024-06-10 12:02:57.622477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:25.612 [2024-06-10 12:02:57.622631] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:39:25.612 [2024-06-10 12:02:57.622778] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:25.612 [2024-06-10 12:02:57.623439] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:25.612 [2024-06-10 12:02:57.623604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:25.612 [2024-06-10 12:02:57.623928] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:25.612 [2024-06-10 12:02:57.624039] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:25.612 [2024-06-10 12:02:57.624118] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:25.612 BaseBdev1 00:39:25.612 12:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:39:26.987 12:02:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:26.987 "name": "raid_bdev1", 00:39:26.987 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:26.987 "strip_size_kb": 64, 00:39:26.987 "state": "online", 00:39:26.987 "raid_level": "raid5f", 00:39:26.987 "superblock": true, 00:39:26.987 "num_base_bdevs": 4, 00:39:26.987 "num_base_bdevs_discovered": 3, 00:39:26.987 "num_base_bdevs_operational": 3, 00:39:26.987 "base_bdevs_list": [ 00:39:26.987 { 00:39:26.987 "name": null, 00:39:26.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.987 "is_configured": false, 00:39:26.987 "data_offset": 2048, 00:39:26.987 "data_size": 63488 00:39:26.987 }, 00:39:26.987 { 00:39:26.987 "name": "BaseBdev2", 00:39:26.987 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:26.987 "is_configured": true, 00:39:26.987 "data_offset": 2048, 00:39:26.987 "data_size": 63488 00:39:26.987 }, 00:39:26.987 { 00:39:26.987 "name": "BaseBdev3", 00:39:26.987 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:26.987 "is_configured": true, 00:39:26.987 "data_offset": 2048, 00:39:26.987 "data_size": 63488 00:39:26.987 }, 00:39:26.987 { 00:39:26.987 "name": "BaseBdev4", 00:39:26.987 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:26.987 "is_configured": true, 00:39:26.987 "data_offset": 2048, 00:39:26.987 "data_size": 63488 00:39:26.987 } 00:39:26.987 ] 00:39:26.987 }' 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:26.987 12:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:27.553 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:27.553 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:27.553 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:27.553 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:27.553 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:27.553 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:27.553 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:27.811 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:27.811 "name": "raid_bdev1", 00:39:27.811 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:27.811 "strip_size_kb": 64, 00:39:27.811 "state": "online", 00:39:27.811 "raid_level": "raid5f", 00:39:27.811 "superblock": true, 
00:39:27.811 "num_base_bdevs": 4, 00:39:27.811 "num_base_bdevs_discovered": 3, 00:39:27.811 "num_base_bdevs_operational": 3, 00:39:27.811 "base_bdevs_list": [ 00:39:27.811 { 00:39:27.811 "name": null, 00:39:27.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.811 "is_configured": false, 00:39:27.811 "data_offset": 2048, 00:39:27.811 "data_size": 63488 00:39:27.811 }, 00:39:27.811 { 00:39:27.812 "name": "BaseBdev2", 00:39:27.812 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:27.812 "is_configured": true, 00:39:27.812 "data_offset": 2048, 00:39:27.812 "data_size": 63488 00:39:27.812 }, 00:39:27.812 { 00:39:27.812 "name": "BaseBdev3", 00:39:27.812 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:27.812 "is_configured": true, 00:39:27.812 "data_offset": 2048, 00:39:27.812 "data_size": 63488 00:39:27.812 }, 00:39:27.812 { 00:39:27.812 "name": "BaseBdev4", 00:39:27.812 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:27.812 "is_configured": true, 00:39:27.812 "data_offset": 2048, 00:39:27.812 "data_size": 63488 00:39:27.812 } 00:39:27.812 ] 00:39:27.812 }' 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@649 -- # local es=0 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:27.812 12:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:28.071 [2024-06-10 12:03:00.026824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:28.071 [2024-06-10 12:03:00.027375] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:28.071 [2024-06-10 12:03:00.027525] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:28.071 request: 00:39:28.071 { 00:39:28.071 "base_bdev": "BaseBdev1", 00:39:28.071 "raid_bdev": "raid_bdev1", 00:39:28.071 "method": "bdev_raid_add_base_bdev", 00:39:28.071 "req_id": 1 00:39:28.071 } 00:39:28.071 Got JSON-RPC error response 00:39:28.071 response: 00:39:28.071 { 00:39:28.071 "code": -22, 00:39:28.071 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:28.071 } 00:39:28.071 12:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # es=1 00:39:28.071 12:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:39:28.071 12:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:39:28.071 12:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:39:28.071 12:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:29.008 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:29.266 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:29.266 "name": "raid_bdev1", 00:39:29.266 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:29.266 "strip_size_kb": 64, 00:39:29.266 "state": "online", 00:39:29.266 "raid_level": "raid5f", 00:39:29.266 "superblock": true, 00:39:29.266 "num_base_bdevs": 4, 00:39:29.266 "num_base_bdevs_discovered": 3, 00:39:29.266 "num_base_bdevs_operational": 3, 00:39:29.266 "base_bdevs_list": [ 00:39:29.266 { 00:39:29.266 "name": null, 00:39:29.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.266 "is_configured": false, 00:39:29.266 "data_offset": 2048, 00:39:29.266 "data_size": 63488 00:39:29.266 }, 00:39:29.266 { 00:39:29.266 "name": "BaseBdev2", 00:39:29.266 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:29.266 "is_configured": true, 00:39:29.266 "data_offset": 2048, 00:39:29.266 
"data_size": 63488 00:39:29.266 }, 00:39:29.266 { 00:39:29.266 "name": "BaseBdev3", 00:39:29.266 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:29.266 "is_configured": true, 00:39:29.266 "data_offset": 2048, 00:39:29.266 "data_size": 63488 00:39:29.266 }, 00:39:29.266 { 00:39:29.266 "name": "BaseBdev4", 00:39:29.266 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:29.266 "is_configured": true, 00:39:29.266 "data_offset": 2048, 00:39:29.266 "data_size": 63488 00:39:29.266 } 00:39:29.266 ] 00:39:29.266 }' 00:39:29.266 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:29.266 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:30.209 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:30.209 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:30.209 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:30.209 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:30.209 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:30.209 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:30.209 12:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:30.209 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:30.209 "name": "raid_bdev1", 00:39:30.209 "uuid": "d8759c15-6664-421d-842b-bb672d5e6225", 00:39:30.209 "strip_size_kb": 64, 00:39:30.209 "state": "online", 00:39:30.209 "raid_level": "raid5f", 00:39:30.209 "superblock": true, 00:39:30.209 "num_base_bdevs": 4, 00:39:30.209 "num_base_bdevs_discovered": 3, 00:39:30.209 "num_base_bdevs_operational": 3, 00:39:30.209 "base_bdevs_list": [ 00:39:30.209 { 00:39:30.209 "name": null, 00:39:30.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:30.209 "is_configured": false, 00:39:30.209 "data_offset": 2048, 00:39:30.209 "data_size": 63488 00:39:30.209 }, 00:39:30.209 { 00:39:30.209 "name": "BaseBdev2", 00:39:30.209 "uuid": "f644230b-a04f-5af3-98a5-9177e975658e", 00:39:30.209 "is_configured": true, 00:39:30.209 "data_offset": 2048, 00:39:30.209 "data_size": 63488 00:39:30.210 }, 00:39:30.210 { 00:39:30.210 "name": "BaseBdev3", 00:39:30.210 "uuid": "fdf20029-28e0-5612-a5cc-7f64129f33d0", 00:39:30.210 "is_configured": true, 00:39:30.210 "data_offset": 2048, 00:39:30.210 "data_size": 63488 00:39:30.210 }, 00:39:30.210 { 00:39:30.210 "name": "BaseBdev4", 00:39:30.210 "uuid": "9456f185-6cb6-5255-b06c-049bb6704083", 00:39:30.210 "is_configured": true, 00:39:30.210 "data_offset": 2048, 00:39:30.210 "data_size": 63488 00:39:30.210 } 00:39:30.210 ] 00:39:30.210 }' 00:39:30.210 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:30.210 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:30.210 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:30.475 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:30.475 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@782 -- # killprocess 159787 00:39:30.475 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@949 -- # '[' -z 159787 ']' 00:39:30.475 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # kill -0 159787 00:39:30.475 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # uname 00:39:30.475 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:30.475 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 159787 00:39:30.475 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:30.476 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:30.476 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 159787' 00:39:30.476 killing process with pid 159787 00:39:30.476 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # kill 159787 00:39:30.476 Received shutdown signal, test time was about 60.000000 seconds 00:39:30.476 00:39:30.476 Latency(us) 00:39:30.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:30.476 =================================================================================================================== 00:39:30.476 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:30.476 12:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # wait 159787 00:39:30.476 [2024-06-10 12:03:02.311295] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:30.476 [2024-06-10 12:03:02.311582] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:30.476 [2024-06-10 12:03:02.311681] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:30.476 [2024-06-10 12:03:02.311693] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:39:31.057 [2024-06-10 12:03:02.944227] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:32.958 ************************************ 00:39:32.958 END TEST raid5f_rebuild_test_sb 00:39:32.958 ************************************ 00:39:32.958 12:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:39:32.958 00:39:32.958 real 0m41.929s 00:39:32.958 user 1m2.844s 00:39:32.958 sys 0m5.018s 00:39:32.958 12:03:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:32.958 12:03:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:32.958 12:03:04 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:39:32.958 12:03:04 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:39:32.958 12:03:04 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:39:32.958 12:03:04 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:32.958 12:03:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:32.958 ************************************ 00:39:32.958 START TEST raid_state_function_test_sb_4k 00:39:32.958 ************************************ 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # raid_state_function_test 
raid1 2 true 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=160818 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:39:32.958 Process raid pid: 160818 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 160818' 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 160818 /var/tmp/spdk-raid.sock 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@830 -- # '[' -z 160818 ']' 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:32.958 12:03:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:32.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:32.958 12:03:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:32.958 [2024-06-10 12:03:04.840588] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:39:32.958 [2024-06-10 12:03:04.841004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:32.958 [2024-06-10 12:03:05.010314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.217 [2024-06-10 12:03:05.275863] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.475 [2024-06-10 12:03:05.511880] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:33.734 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:33.734 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@863 -- # return 0 00:39:33.734 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:39:33.993 [2024-06-10 12:03:05.881618] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:33.993 [2024-06-10 12:03:05.881916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:33.993 [2024-06-10 12:03:05.882013] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:33.993 [2024-06-10 12:03:05.882079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:33.993 12:03:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:34.252 12:03:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:34.252 "name": "Existed_Raid", 00:39:34.252 "uuid": "a186ea5f-fc5f-497f-b7c8-3d218d56f0e4", 00:39:34.252 "strip_size_kb": 0, 00:39:34.252 "state": "configuring", 00:39:34.252 "raid_level": "raid1", 00:39:34.252 "superblock": true, 00:39:34.252 "num_base_bdevs": 2, 00:39:34.252 "num_base_bdevs_discovered": 0, 00:39:34.252 "num_base_bdevs_operational": 2, 00:39:34.252 "base_bdevs_list": [ 00:39:34.252 { 00:39:34.252 "name": "BaseBdev1", 00:39:34.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:34.252 "is_configured": false, 00:39:34.252 "data_offset": 0, 00:39:34.252 "data_size": 0 00:39:34.252 }, 00:39:34.252 { 00:39:34.252 "name": "BaseBdev2", 00:39:34.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:34.252 "is_configured": false, 00:39:34.252 "data_offset": 0, 00:39:34.252 "data_size": 0 00:39:34.252 } 00:39:34.252 ] 00:39:34.252 }' 00:39:34.252 12:03:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:34.252 12:03:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:34.819 12:03:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:39:35.085 [2024-06-10 12:03:06.893701] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:35.085 [2024-06-10 12:03:06.893920] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:39:35.085 12:03:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:39:35.085 [2024-06-10 12:03:07.093757] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:35.085 [2024-06-10 12:03:07.093975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:35.085 [2024-06-10 12:03:07.094110] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:35.085 [2024-06-10 12:03:07.094177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:35.085 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:39:35.356 [2024-06-10 12:03:07.329669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:35.356 BaseBdev1 00:39:35.356 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:39:35.356 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:39:35.356 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:39:35.356 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local i 00:39:35.356 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:39:35.356 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:39:35.356 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:35.614 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:35.873 [ 00:39:35.873 { 00:39:35.873 "name": "BaseBdev1", 00:39:35.873 "aliases": [ 00:39:35.873 "12c6eec8-d4b2-42b0-9fd1-41cb4be57ebf" 00:39:35.873 ], 00:39:35.873 "product_name": "Malloc disk", 00:39:35.873 "block_size": 4096, 00:39:35.873 "num_blocks": 8192, 00:39:35.873 "uuid": "12c6eec8-d4b2-42b0-9fd1-41cb4be57ebf", 00:39:35.873 "assigned_rate_limits": { 00:39:35.873 "rw_ios_per_sec": 0, 00:39:35.873 "rw_mbytes_per_sec": 0, 00:39:35.873 "r_mbytes_per_sec": 0, 00:39:35.873 "w_mbytes_per_sec": 0 00:39:35.873 }, 00:39:35.873 "claimed": true, 00:39:35.873 "claim_type": "exclusive_write", 00:39:35.873 "zoned": false, 00:39:35.873 "supported_io_types": { 00:39:35.873 "read": true, 00:39:35.873 "write": true, 00:39:35.873 "unmap": true, 00:39:35.873 "write_zeroes": true, 00:39:35.873 "flush": true, 00:39:35.873 "reset": true, 00:39:35.873 "compare": false, 00:39:35.873 "compare_and_write": false, 00:39:35.873 "abort": true, 00:39:35.873 "nvme_admin": false, 00:39:35.873 "nvme_io": false 00:39:35.873 }, 00:39:35.873 "memory_domains": [ 00:39:35.873 { 00:39:35.873 "dma_device_id": "system", 00:39:35.873 "dma_device_type": 1 00:39:35.873 }, 00:39:35.873 { 00:39:35.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:35.873 "dma_device_type": 2 00:39:35.873 } 00:39:35.873 ], 00:39:35.873 "driver_specific": {} 00:39:35.873 } 00:39:35.873 ] 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # return 0 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:35.873 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:39:36.131 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:36.131 "name": "Existed_Raid", 00:39:36.131 "uuid": "2fd219e5-868e-428a-859c-9edff07dae98", 00:39:36.131 "strip_size_kb": 0, 00:39:36.131 "state": "configuring", 00:39:36.132 "raid_level": "raid1", 00:39:36.132 "superblock": true, 00:39:36.132 "num_base_bdevs": 2, 00:39:36.132 "num_base_bdevs_discovered": 1, 00:39:36.132 "num_base_bdevs_operational": 2, 00:39:36.132 "base_bdevs_list": [ 00:39:36.132 { 00:39:36.132 "name": "BaseBdev1", 00:39:36.132 "uuid": "12c6eec8-d4b2-42b0-9fd1-41cb4be57ebf", 00:39:36.132 "is_configured": true, 00:39:36.132 "data_offset": 256, 00:39:36.132 "data_size": 7936 00:39:36.132 }, 00:39:36.132 { 00:39:36.132 "name": "BaseBdev2", 00:39:36.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:36.132 "is_configured": false, 00:39:36.132 "data_offset": 0, 00:39:36.132 "data_size": 0 00:39:36.132 } 00:39:36.132 ] 00:39:36.132 }' 00:39:36.132 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:36.132 12:03:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:36.697 12:03:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:39:36.697 [2024-06-10 12:03:08.734062] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:36.698 [2024-06-10 12:03:08.734262] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:39:36.698 12:03:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:39:37.265 [2024-06-10 12:03:09.054157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:37.265 [2024-06-10 12:03:09.056534] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:37.265 [2024-06-10 12:03:09.056712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:37.265 
12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:37.265 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:37.266 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:37.524 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:37.524 "name": "Existed_Raid", 00:39:37.524 "uuid": "333a7716-76a1-4499-9299-f5588308557f", 00:39:37.524 "strip_size_kb": 0, 00:39:37.524 "state": "configuring", 00:39:37.524 "raid_level": "raid1", 00:39:37.524 "superblock": true, 00:39:37.524 "num_base_bdevs": 2, 00:39:37.524 "num_base_bdevs_discovered": 1, 00:39:37.524 "num_base_bdevs_operational": 2, 00:39:37.524 "base_bdevs_list": [ 00:39:37.524 { 00:39:37.524 "name": "BaseBdev1", 00:39:37.524 "uuid": "12c6eec8-d4b2-42b0-9fd1-41cb4be57ebf", 00:39:37.524 "is_configured": true, 00:39:37.524 "data_offset": 256, 00:39:37.524 "data_size": 7936 00:39:37.524 }, 00:39:37.524 { 00:39:37.524 "name": "BaseBdev2", 00:39:37.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:37.524 "is_configured": false, 00:39:37.524 "data_offset": 0, 00:39:37.524 "data_size": 0 00:39:37.524 } 00:39:37.524 ] 00:39:37.524 }' 00:39:37.524 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:37.524 12:03:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:38.091 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:39:38.350 [2024-06-10 12:03:10.393504] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:38.350 [2024-06-10 12:03:10.393980] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:39:38.350 [2024-06-10 12:03:10.394106] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:38.350 [2024-06-10 12:03:10.394327] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:39:38.350 [2024-06-10 12:03:10.394767] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:39:38.350 [2024-06-10 12:03:10.394885] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:39:38.350 BaseBdev2 00:39:38.350 [2024-06-10 12:03:10.395146] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:38.609 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:39:38.609 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:39:38.609 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:39:38.609 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local i 00:39:38.609 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:39:38.609 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 
-- # bdev_timeout=2000 00:39:38.609 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:38.609 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:38.867 [ 00:39:38.867 { 00:39:38.867 "name": "BaseBdev2", 00:39:38.867 "aliases": [ 00:39:38.867 "5686b054-29b3-47e6-87cd-e823b011c375" 00:39:38.867 ], 00:39:38.867 "product_name": "Malloc disk", 00:39:38.867 "block_size": 4096, 00:39:38.867 "num_blocks": 8192, 00:39:38.867 "uuid": "5686b054-29b3-47e6-87cd-e823b011c375", 00:39:38.867 "assigned_rate_limits": { 00:39:38.867 "rw_ios_per_sec": 0, 00:39:38.867 "rw_mbytes_per_sec": 0, 00:39:38.867 "r_mbytes_per_sec": 0, 00:39:38.867 "w_mbytes_per_sec": 0 00:39:38.867 }, 00:39:38.867 "claimed": true, 00:39:38.867 "claim_type": "exclusive_write", 00:39:38.867 "zoned": false, 00:39:38.867 "supported_io_types": { 00:39:38.867 "read": true, 00:39:38.867 "write": true, 00:39:38.867 "unmap": true, 00:39:38.867 "write_zeroes": true, 00:39:38.867 "flush": true, 00:39:38.867 "reset": true, 00:39:38.867 "compare": false, 00:39:38.867 "compare_and_write": false, 00:39:38.867 "abort": true, 00:39:38.867 "nvme_admin": false, 00:39:38.867 "nvme_io": false 00:39:38.867 }, 00:39:38.867 "memory_domains": [ 00:39:38.867 { 00:39:38.867 "dma_device_id": "system", 00:39:38.867 "dma_device_type": 1 00:39:38.867 }, 00:39:38.867 { 00:39:38.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:38.867 "dma_device_type": 2 00:39:38.867 } 00:39:38.867 ], 00:39:38.867 "driver_specific": {} 00:39:38.867 } 00:39:38.867 ] 00:39:38.867 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # return 0 00:39:38.867 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:39:38.867 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:39:38.867 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:38.868 12:03:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:39.125 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:39.125 "name": "Existed_Raid", 00:39:39.125 "uuid": "333a7716-76a1-4499-9299-f5588308557f", 00:39:39.125 "strip_size_kb": 0, 00:39:39.125 "state": "online", 00:39:39.125 "raid_level": "raid1", 00:39:39.125 "superblock": true, 00:39:39.125 "num_base_bdevs": 2, 00:39:39.125 "num_base_bdevs_discovered": 2, 00:39:39.125 "num_base_bdevs_operational": 2, 00:39:39.125 "base_bdevs_list": [ 00:39:39.125 { 00:39:39.125 "name": "BaseBdev1", 00:39:39.125 "uuid": "12c6eec8-d4b2-42b0-9fd1-41cb4be57ebf", 00:39:39.125 "is_configured": true, 00:39:39.125 "data_offset": 256, 00:39:39.125 "data_size": 7936 00:39:39.125 }, 00:39:39.125 { 00:39:39.125 "name": "BaseBdev2", 00:39:39.125 "uuid": "5686b054-29b3-47e6-87cd-e823b011c375", 00:39:39.125 "is_configured": true, 00:39:39.125 "data_offset": 256, 00:39:39.125 "data_size": 7936 00:39:39.125 } 00:39:39.125 ] 00:39:39.125 }' 00:39:39.125 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:39.125 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:39.690 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:39:39.690 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:39:39.690 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:39:39.690 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:39:39.690 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:39:39.690 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:39:39.690 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:39:39.690 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:39:39.948 [2024-06-10 12:03:11.898200] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:39.948 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:39:39.948 "name": "Existed_Raid", 00:39:39.948 "aliases": [ 00:39:39.948 "333a7716-76a1-4499-9299-f5588308557f" 00:39:39.948 ], 00:39:39.948 "product_name": "Raid Volume", 00:39:39.948 "block_size": 4096, 00:39:39.948 "num_blocks": 7936, 00:39:39.948 "uuid": "333a7716-76a1-4499-9299-f5588308557f", 00:39:39.948 "assigned_rate_limits": { 00:39:39.948 "rw_ios_per_sec": 0, 00:39:39.948 "rw_mbytes_per_sec": 0, 00:39:39.948 "r_mbytes_per_sec": 0, 00:39:39.948 "w_mbytes_per_sec": 0 00:39:39.948 }, 00:39:39.948 "claimed": false, 00:39:39.948 "zoned": false, 00:39:39.948 "supported_io_types": { 00:39:39.948 "read": true, 00:39:39.948 "write": true, 00:39:39.948 "unmap": false, 00:39:39.948 "write_zeroes": true, 00:39:39.948 "flush": false, 00:39:39.948 "reset": true, 00:39:39.948 "compare": false, 00:39:39.948 "compare_and_write": false, 00:39:39.948 "abort": false, 00:39:39.948 "nvme_admin": false, 00:39:39.948 "nvme_io": false 00:39:39.948 }, 00:39:39.948 "memory_domains": [ 00:39:39.948 { 00:39:39.948 
"dma_device_id": "system", 00:39:39.948 "dma_device_type": 1 00:39:39.948 }, 00:39:39.948 { 00:39:39.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:39.948 "dma_device_type": 2 00:39:39.948 }, 00:39:39.948 { 00:39:39.948 "dma_device_id": "system", 00:39:39.948 "dma_device_type": 1 00:39:39.948 }, 00:39:39.948 { 00:39:39.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:39.948 "dma_device_type": 2 00:39:39.948 } 00:39:39.948 ], 00:39:39.948 "driver_specific": { 00:39:39.948 "raid": { 00:39:39.948 "uuid": "333a7716-76a1-4499-9299-f5588308557f", 00:39:39.948 "strip_size_kb": 0, 00:39:39.948 "state": "online", 00:39:39.948 "raid_level": "raid1", 00:39:39.948 "superblock": true, 00:39:39.948 "num_base_bdevs": 2, 00:39:39.948 "num_base_bdevs_discovered": 2, 00:39:39.948 "num_base_bdevs_operational": 2, 00:39:39.948 "base_bdevs_list": [ 00:39:39.948 { 00:39:39.948 "name": "BaseBdev1", 00:39:39.948 "uuid": "12c6eec8-d4b2-42b0-9fd1-41cb4be57ebf", 00:39:39.948 "is_configured": true, 00:39:39.948 "data_offset": 256, 00:39:39.948 "data_size": 7936 00:39:39.948 }, 00:39:39.948 { 00:39:39.948 "name": "BaseBdev2", 00:39:39.948 "uuid": "5686b054-29b3-47e6-87cd-e823b011c375", 00:39:39.948 "is_configured": true, 00:39:39.948 "data_offset": 256, 00:39:39.948 "data_size": 7936 00:39:39.948 } 00:39:39.948 ] 00:39:39.948 } 00:39:39.948 } 00:39:39.948 }' 00:39:39.948 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:39.948 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:39:39.948 BaseBdev2' 00:39:39.948 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:39.948 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:39:39.948 12:03:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:40.216 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:40.216 "name": "BaseBdev1", 00:39:40.216 "aliases": [ 00:39:40.216 "12c6eec8-d4b2-42b0-9fd1-41cb4be57ebf" 00:39:40.216 ], 00:39:40.216 "product_name": "Malloc disk", 00:39:40.216 "block_size": 4096, 00:39:40.216 "num_blocks": 8192, 00:39:40.216 "uuid": "12c6eec8-d4b2-42b0-9fd1-41cb4be57ebf", 00:39:40.216 "assigned_rate_limits": { 00:39:40.216 "rw_ios_per_sec": 0, 00:39:40.216 "rw_mbytes_per_sec": 0, 00:39:40.217 "r_mbytes_per_sec": 0, 00:39:40.217 "w_mbytes_per_sec": 0 00:39:40.217 }, 00:39:40.217 "claimed": true, 00:39:40.217 "claim_type": "exclusive_write", 00:39:40.217 "zoned": false, 00:39:40.217 "supported_io_types": { 00:39:40.217 "read": true, 00:39:40.217 "write": true, 00:39:40.217 "unmap": true, 00:39:40.217 "write_zeroes": true, 00:39:40.217 "flush": true, 00:39:40.217 "reset": true, 00:39:40.217 "compare": false, 00:39:40.217 "compare_and_write": false, 00:39:40.217 "abort": true, 00:39:40.217 "nvme_admin": false, 00:39:40.217 "nvme_io": false 00:39:40.217 }, 00:39:40.217 "memory_domains": [ 00:39:40.217 { 00:39:40.217 "dma_device_id": "system", 00:39:40.217 "dma_device_type": 1 00:39:40.217 }, 00:39:40.217 { 00:39:40.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:40.217 "dma_device_type": 2 00:39:40.217 } 00:39:40.217 ], 00:39:40.217 "driver_specific": {} 00:39:40.217 }' 00:39:40.217 12:03:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:40.217 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:40.480 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:40.480 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:40.480 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:40.480 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:40.480 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:40.480 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:40.480 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:40.480 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:40.737 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:40.737 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:40.737 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:40.737 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:39:40.737 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:40.995 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:40.995 "name": "BaseBdev2", 00:39:40.995 "aliases": [ 00:39:40.995 "5686b054-29b3-47e6-87cd-e823b011c375" 00:39:40.995 ], 00:39:40.995 "product_name": "Malloc disk", 00:39:40.995 "block_size": 4096, 00:39:40.995 "num_blocks": 8192, 00:39:40.995 "uuid": "5686b054-29b3-47e6-87cd-e823b011c375", 00:39:40.995 "assigned_rate_limits": { 00:39:40.995 "rw_ios_per_sec": 0, 00:39:40.995 "rw_mbytes_per_sec": 0, 00:39:40.995 "r_mbytes_per_sec": 0, 00:39:40.995 "w_mbytes_per_sec": 0 00:39:40.995 }, 00:39:40.995 "claimed": true, 00:39:40.995 "claim_type": "exclusive_write", 00:39:40.995 "zoned": false, 00:39:40.995 "supported_io_types": { 00:39:40.995 "read": true, 00:39:40.995 "write": true, 00:39:40.995 "unmap": true, 00:39:40.995 "write_zeroes": true, 00:39:40.995 "flush": true, 00:39:40.995 "reset": true, 00:39:40.995 "compare": false, 00:39:40.995 "compare_and_write": false, 00:39:40.995 "abort": true, 00:39:40.995 "nvme_admin": false, 00:39:40.995 "nvme_io": false 00:39:40.995 }, 00:39:40.995 "memory_domains": [ 00:39:40.995 { 00:39:40.995 "dma_device_id": "system", 00:39:40.995 "dma_device_type": 1 00:39:40.995 }, 00:39:40.995 { 00:39:40.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:40.995 "dma_device_type": 2 00:39:40.995 } 00:39:40.995 ], 00:39:40.995 "driver_specific": {} 00:39:40.995 }' 00:39:40.995 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:40.995 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:40.995 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:40.995 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:39:40.995 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:40.995 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:40.995 12:03:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:40.995 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:41.253 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:41.253 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:41.253 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:41.253 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:41.253 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:39:41.510 [2024-06-10 12:03:13.378274] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:41.510 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:39:41.510 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:39:41.510 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:39:41.510 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:39:41.510 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:41.511 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:41.769 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:41.769 "name": "Existed_Raid", 00:39:41.769 "uuid": "333a7716-76a1-4499-9299-f5588308557f", 00:39:41.769 "strip_size_kb": 0, 00:39:41.769 "state": "online", 00:39:41.769 
"raid_level": "raid1", 00:39:41.769 "superblock": true, 00:39:41.769 "num_base_bdevs": 2, 00:39:41.769 "num_base_bdevs_discovered": 1, 00:39:41.769 "num_base_bdevs_operational": 1, 00:39:41.769 "base_bdevs_list": [ 00:39:41.769 { 00:39:41.769 "name": null, 00:39:41.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:41.769 "is_configured": false, 00:39:41.769 "data_offset": 256, 00:39:41.769 "data_size": 7936 00:39:41.769 }, 00:39:41.769 { 00:39:41.769 "name": "BaseBdev2", 00:39:41.769 "uuid": "5686b054-29b3-47e6-87cd-e823b011c375", 00:39:41.769 "is_configured": true, 00:39:41.769 "data_offset": 256, 00:39:41.769 "data_size": 7936 00:39:41.769 } 00:39:41.769 ] 00:39:41.769 }' 00:39:41.769 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:41.769 12:03:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:42.703 12:03:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:39:42.703 12:03:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:39:42.703 12:03:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:42.703 12:03:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:39:42.703 12:03:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:39:42.703 12:03:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:42.703 12:03:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:39:42.989 [2024-06-10 12:03:14.870446] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:42.989 [2024-06-10 12:03:14.870888] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:42.989 [2024-06-10 12:03:14.986768] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:42.989 [2024-06-10 12:03:14.987083] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:42.989 [2024-06-10 12:03:14.987180] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:39:42.989 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:39:42.989 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:39:42.989 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:39:42.989 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 160818 00:39:43.247 12:03:15 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@949 -- # '[' -z 160818 ']' 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # kill -0 160818 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # uname 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 160818 00:39:43.247 killing process with pid 160818 00:39:43.247 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:43.248 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:43.248 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # echo 'killing process with pid 160818' 00:39:43.248 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # kill 160818 00:39:43.248 12:03:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # wait 160818 00:39:43.248 [2024-06-10 12:03:15.260266] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:43.248 [2024-06-10 12:03:15.260486] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:45.160 ************************************ 00:39:45.160 END TEST raid_state_function_test_sb_4k 00:39:45.160 ************************************ 00:39:45.160 12:03:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:39:45.160 00:39:45.160 real 0m12.133s 00:39:45.160 user 0m20.489s 00:39:45.160 sys 0m1.725s 00:39:45.160 12:03:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:45.160 12:03:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:45.160 12:03:16 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:39:45.160 12:03:16 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:39:45.160 12:03:16 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:45.160 12:03:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:45.160 ************************************ 00:39:45.160 START TEST raid_superblock_test_4k 00:39:45.160 ************************************ 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:39:45.161 12:03:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=161195 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 161195 /var/tmp/spdk-raid.sock 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@830 -- # '[' -z 161195 ']' 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:45.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:45.161 12:03:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:45.161 [2024-06-10 12:03:17.020789] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
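The raid_superblock_test case drives a dedicated bdev_svc application over its own RPC socket, and every later step is an rpc.py call against that socket. Below is a minimal sketch of the startup sequence, assuming the repository paths, socket name, and waitforlisten helper exactly as they appear in the trace above; $rpc_py is only a shorthand introduced for these sketches:

  # launch the bdev service with a raid-specific RPC socket and bdev_raid debug logging
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # wait until the app is listening on the UNIX domain socket before issuing any RPC
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
  # all subsequent RPCs in these sketches go through this wrapper
  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"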
00:39:45.161 [2024-06-10 12:03:17.021443] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161195 ] 00:39:45.161 [2024-06-10 12:03:17.193318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.421 [2024-06-10 12:03:17.413764] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.680 [2024-06-10 12:03:17.641728] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@863 -- # return 0 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:45.939 12:03:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:39:46.196 malloc1 00:39:46.196 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:46.454 [2024-06-10 12:03:18.398440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:46.454 [2024-06-10 12:03:18.398857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:46.454 [2024-06-10 12:03:18.399086] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:39:46.454 [2024-06-10 12:03:18.399276] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:46.454 [2024-06-10 12:03:18.402630] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:46.454 [2024-06-10 12:03:18.402891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:46.454 pt1 00:39:46.454 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:39:46.454 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:46.454 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:39:46.454 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:39:46.454 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:46.454 12:03:18 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:46.454 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:39:46.454 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:46.454 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:39:46.712 malloc2 00:39:46.712 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:47.017 [2024-06-10 12:03:18.916562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:47.017 [2024-06-10 12:03:18.916854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:47.017 [2024-06-10 12:03:18.916945] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:39:47.017 [2024-06-10 12:03:18.917079] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:47.017 [2024-06-10 12:03:18.919652] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:47.017 [2024-06-10 12:03:18.919824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:47.017 pt2 00:39:47.017 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:39:47.017 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:47.017 12:03:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:39:47.312 [2024-06-10 12:03:19.192714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:47.312 [2024-06-10 12:03:19.195159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:47.312 [2024-06-10 12:03:19.195551] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:39:47.312 [2024-06-10 12:03:19.195679] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:47.312 [2024-06-10 12:03:19.195878] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:39:47.312 [2024-06-10 12:03:19.196409] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:39:47.312 [2024-06-10 12:03:19.196546] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:39:47.312 [2024-06-10 12:03:19.196895] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
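Each base bdev of raid_bdev1 is a 32 MiB malloc bdev (8192 blocks of 4096 bytes) wrapped in a passthru bdev with a fixed UUID, and the array is created with -s so a superblock is written to the base bdevs, which is consistent with the volume reporting 7936 blocks and a data_offset of 256. A condensed sketch of the RPC sequence traced above, reusing the $rpc_py shorthand from the previous sketch:

  # base bdev 1: malloc backing device exposed through passthru bdev pt1
  $rpc_py bdev_malloc_create 32 4096 -b malloc1
  $rpc_py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # base bdev 2: same geometry, exposed as pt2
  $rpc_py bdev_malloc_create 32 4096 -b malloc2
  $rpc_py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # raid1 volume over the two passthru bdevs; -s writes a superblock to each base bdev
  $rpc_py bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  # the state checks in the trace parse this JSON (state, raid_level, base_bdevs_list) with jq
  $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'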
00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:47.312 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:47.570 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:47.570 "name": "raid_bdev1", 00:39:47.570 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:47.570 "strip_size_kb": 0, 00:39:47.570 "state": "online", 00:39:47.570 "raid_level": "raid1", 00:39:47.570 "superblock": true, 00:39:47.570 "num_base_bdevs": 2, 00:39:47.570 "num_base_bdevs_discovered": 2, 00:39:47.570 "num_base_bdevs_operational": 2, 00:39:47.570 "base_bdevs_list": [ 00:39:47.570 { 00:39:47.570 "name": "pt1", 00:39:47.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:47.570 "is_configured": true, 00:39:47.570 "data_offset": 256, 00:39:47.570 "data_size": 7936 00:39:47.570 }, 00:39:47.570 { 00:39:47.570 "name": "pt2", 00:39:47.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:47.570 "is_configured": true, 00:39:47.570 "data_offset": 256, 00:39:47.570 "data_size": 7936 00:39:47.570 } 00:39:47.570 ] 00:39:47.570 }' 00:39:47.570 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:47.570 12:03:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:48.137 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:39:48.137 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:39:48.137 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:39:48.137 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:39:48.137 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:39:48.137 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:39:48.137 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:48.137 12:03:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:39:48.137 [2024-06-10 12:03:20.101154] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:48.137 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:39:48.137 "name": "raid_bdev1", 00:39:48.137 "aliases": [ 00:39:48.137 "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358" 00:39:48.137 ], 00:39:48.137 "product_name": "Raid Volume", 00:39:48.137 "block_size": 4096, 00:39:48.137 "num_blocks": 7936, 00:39:48.137 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:48.137 "assigned_rate_limits": { 00:39:48.137 
"rw_ios_per_sec": 0, 00:39:48.137 "rw_mbytes_per_sec": 0, 00:39:48.137 "r_mbytes_per_sec": 0, 00:39:48.137 "w_mbytes_per_sec": 0 00:39:48.137 }, 00:39:48.137 "claimed": false, 00:39:48.137 "zoned": false, 00:39:48.137 "supported_io_types": { 00:39:48.137 "read": true, 00:39:48.137 "write": true, 00:39:48.137 "unmap": false, 00:39:48.137 "write_zeroes": true, 00:39:48.137 "flush": false, 00:39:48.137 "reset": true, 00:39:48.137 "compare": false, 00:39:48.137 "compare_and_write": false, 00:39:48.137 "abort": false, 00:39:48.137 "nvme_admin": false, 00:39:48.137 "nvme_io": false 00:39:48.137 }, 00:39:48.137 "memory_domains": [ 00:39:48.137 { 00:39:48.137 "dma_device_id": "system", 00:39:48.137 "dma_device_type": 1 00:39:48.137 }, 00:39:48.137 { 00:39:48.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.137 "dma_device_type": 2 00:39:48.137 }, 00:39:48.137 { 00:39:48.137 "dma_device_id": "system", 00:39:48.137 "dma_device_type": 1 00:39:48.137 }, 00:39:48.137 { 00:39:48.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.137 "dma_device_type": 2 00:39:48.137 } 00:39:48.137 ], 00:39:48.137 "driver_specific": { 00:39:48.137 "raid": { 00:39:48.137 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:48.137 "strip_size_kb": 0, 00:39:48.137 "state": "online", 00:39:48.137 "raid_level": "raid1", 00:39:48.137 "superblock": true, 00:39:48.137 "num_base_bdevs": 2, 00:39:48.137 "num_base_bdevs_discovered": 2, 00:39:48.137 "num_base_bdevs_operational": 2, 00:39:48.137 "base_bdevs_list": [ 00:39:48.137 { 00:39:48.137 "name": "pt1", 00:39:48.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:48.137 "is_configured": true, 00:39:48.137 "data_offset": 256, 00:39:48.137 "data_size": 7936 00:39:48.137 }, 00:39:48.137 { 00:39:48.137 "name": "pt2", 00:39:48.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:48.137 "is_configured": true, 00:39:48.137 "data_offset": 256, 00:39:48.137 "data_size": 7936 00:39:48.137 } 00:39:48.137 ] 00:39:48.137 } 00:39:48.137 } 00:39:48.137 }' 00:39:48.137 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:48.137 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:39:48.137 pt2' 00:39:48.137 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:48.137 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:48.137 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:39:48.396 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:48.396 "name": "pt1", 00:39:48.396 "aliases": [ 00:39:48.396 "00000000-0000-0000-0000-000000000001" 00:39:48.396 ], 00:39:48.396 "product_name": "passthru", 00:39:48.396 "block_size": 4096, 00:39:48.396 "num_blocks": 8192, 00:39:48.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:48.396 "assigned_rate_limits": { 00:39:48.396 "rw_ios_per_sec": 0, 00:39:48.396 "rw_mbytes_per_sec": 0, 00:39:48.396 "r_mbytes_per_sec": 0, 00:39:48.396 "w_mbytes_per_sec": 0 00:39:48.396 }, 00:39:48.396 "claimed": true, 00:39:48.396 "claim_type": "exclusive_write", 00:39:48.396 "zoned": false, 00:39:48.396 "supported_io_types": { 00:39:48.396 "read": true, 00:39:48.396 "write": true, 00:39:48.396 "unmap": true, 00:39:48.396 "write_zeroes": true, 
00:39:48.396 "flush": true, 00:39:48.396 "reset": true, 00:39:48.396 "compare": false, 00:39:48.396 "compare_and_write": false, 00:39:48.396 "abort": true, 00:39:48.396 "nvme_admin": false, 00:39:48.396 "nvme_io": false 00:39:48.396 }, 00:39:48.396 "memory_domains": [ 00:39:48.396 { 00:39:48.396 "dma_device_id": "system", 00:39:48.396 "dma_device_type": 1 00:39:48.396 }, 00:39:48.396 { 00:39:48.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.396 "dma_device_type": 2 00:39:48.396 } 00:39:48.396 ], 00:39:48.396 "driver_specific": { 00:39:48.396 "passthru": { 00:39:48.396 "name": "pt1", 00:39:48.396 "base_bdev_name": "malloc1" 00:39:48.396 } 00:39:48.396 } 00:39:48.396 }' 00:39:48.396 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:48.396 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:48.396 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:48.396 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:48.655 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:39:48.914 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:48.914 "name": "pt2", 00:39:48.914 "aliases": [ 00:39:48.914 "00000000-0000-0000-0000-000000000002" 00:39:48.914 ], 00:39:48.914 "product_name": "passthru", 00:39:48.914 "block_size": 4096, 00:39:48.914 "num_blocks": 8192, 00:39:48.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:48.914 "assigned_rate_limits": { 00:39:48.914 "rw_ios_per_sec": 0, 00:39:48.914 "rw_mbytes_per_sec": 0, 00:39:48.914 "r_mbytes_per_sec": 0, 00:39:48.914 "w_mbytes_per_sec": 0 00:39:48.914 }, 00:39:48.914 "claimed": true, 00:39:48.914 "claim_type": "exclusive_write", 00:39:48.914 "zoned": false, 00:39:48.914 "supported_io_types": { 00:39:48.914 "read": true, 00:39:48.914 "write": true, 00:39:48.914 "unmap": true, 00:39:48.914 "write_zeroes": true, 00:39:48.914 "flush": true, 00:39:48.914 "reset": true, 00:39:48.914 "compare": false, 00:39:48.914 "compare_and_write": false, 00:39:48.914 "abort": true, 00:39:48.914 "nvme_admin": false, 00:39:48.914 "nvme_io": false 00:39:48.914 }, 00:39:48.914 "memory_domains": [ 00:39:48.914 { 00:39:48.914 "dma_device_id": "system", 00:39:48.914 "dma_device_type": 1 00:39:48.914 }, 00:39:48.914 { 00:39:48.914 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:48.914 "dma_device_type": 2 00:39:48.914 } 00:39:48.914 ], 00:39:48.914 "driver_specific": { 00:39:48.914 "passthru": { 00:39:48.914 "name": "pt2", 00:39:48.914 "base_bdev_name": "malloc2" 00:39:48.914 } 00:39:48.914 } 00:39:48.914 }' 00:39:48.914 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:49.173 12:03:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:49.173 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:49.432 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:49.432 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:49.432 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:39:49.432 [2024-06-10 12:03:21.489373] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:49.691 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358 00:39:49.691 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358 ']' 00:39:49.691 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:49.691 [2024-06-10 12:03:21.685215] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:49.691 [2024-06-10 12:03:21.685430] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:49.691 [2024-06-10 12:03:21.685635] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:49.691 [2024-06-10 12:03:21.685787] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:49.691 [2024-06-10 12:03:21.685895] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:39:49.691 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:39:49.691 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:49.949 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:39:49.949 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:39:49.949 12:03:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:39:49.949 12:03:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:50.209 12:03:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:39:50.209 12:03:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:50.468 12:03:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:39:50.468 12:03:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@649 -- # local es=0 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:50.726 [2024-06-10 12:03:22.741437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:50.726 [2024-06-10 12:03:22.743885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:50.726 [2024-06-10 12:03:22.744124] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:39:50.726 [2024-06-10 12:03:22.744359] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:39:50.726 [2024-06-10 12:03:22.744497] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:39:50.726 [2024-06-10 12:03:22.744541] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:39:50.726 request: 00:39:50.726 { 00:39:50.726 "name": "raid_bdev1", 00:39:50.726 "raid_level": "raid1", 00:39:50.726 "base_bdevs": [ 00:39:50.726 "malloc1", 00:39:50.726 "malloc2" 00:39:50.726 ], 00:39:50.726 "superblock": false, 00:39:50.726 "method": "bdev_raid_create", 00:39:50.726 "req_id": 1 00:39:50.726 } 00:39:50.726 Got JSON-RPC error response 00:39:50.726 response: 00:39:50.726 { 00:39:50.726 "code": -17, 00:39:50.726 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:50.726 } 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # es=1 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:39:50.726 12:03:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:50.984 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:39:50.984 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:39:50.984 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:51.244 [2024-06-10 12:03:23.237511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:51.244 [2024-06-10 12:03:23.237804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:51.244 [2024-06-10 12:03:23.237933] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:51.244 [2024-06-10 12:03:23.238042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:51.244 [2024-06-10 12:03:23.240705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:51.244 [2024-06-10 12:03:23.240912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:51.244 [2024-06-10 12:03:23.241146] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:51.244 [2024-06-10 12:03:23.241328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:51.244 pt1 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
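Because the first array left a superblock on malloc1 and malloc2, creating a new raid directly on the malloc bdevs is expected to fail with JSON-RPC error -17 (File exists), while re-registering one passthru bdev lets raid_bdev1 be re-assembled from that superblock and held in the configuring state until its second base bdev appears. A sketch of the two checks traced above; NOT is the autotest helper visible in the trace that succeeds only when the wrapped command fails, and $rpc_py is the same shorthand as before:

  # rejected: both malloc bdevs already carry raid_bdev1's superblock
  NOT $rpc_py bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
  # re-register pt1; its superblock is found and raid_bdev1 re-appears in the
  # "configuring" state with 1 of 2 base bdevs discovered
  $rpc_py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'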
00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:51.244 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:51.245 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:51.245 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:51.503 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:51.503 "name": "raid_bdev1", 00:39:51.503 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:51.503 "strip_size_kb": 0, 00:39:51.503 "state": "configuring", 00:39:51.503 "raid_level": "raid1", 00:39:51.503 "superblock": true, 00:39:51.503 "num_base_bdevs": 2, 00:39:51.503 "num_base_bdevs_discovered": 1, 00:39:51.503 "num_base_bdevs_operational": 2, 00:39:51.503 "base_bdevs_list": [ 00:39:51.503 { 00:39:51.503 "name": "pt1", 00:39:51.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:51.503 "is_configured": true, 00:39:51.503 "data_offset": 256, 00:39:51.503 "data_size": 7936 00:39:51.503 }, 00:39:51.503 { 00:39:51.503 "name": null, 00:39:51.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:51.503 "is_configured": false, 00:39:51.503 "data_offset": 256, 00:39:51.503 "data_size": 7936 00:39:51.503 } 00:39:51.503 ] 00:39:51.503 }' 00:39:51.503 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:51.503 12:03:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:52.071 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:39:52.071 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:39:52.071 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:39:52.071 12:03:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:52.329 [2024-06-10 12:03:24.173772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:52.329 [2024-06-10 12:03:24.174060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:52.329 [2024-06-10 12:03:24.174125] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:39:52.329 [2024-06-10 12:03:24.174217] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:52.329 [2024-06-10 12:03:24.174706] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:52.329 [2024-06-10 12:03:24.174870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:52.329 [2024-06-10 12:03:24.175060] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:52.329 [2024-06-10 12:03:24.175181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:52.329 [2024-06-10 12:03:24.175332] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 
00:39:52.329 [2024-06-10 12:03:24.175482] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:52.329 [2024-06-10 12:03:24.175614] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:39:52.329 [2024-06-10 12:03:24.176046] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:39:52.329 [2024-06-10 12:03:24.176155] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:39:52.329 [2024-06-10 12:03:24.176393] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:52.329 pt2 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:52.329 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.587 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:52.587 "name": "raid_bdev1", 00:39:52.587 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:52.587 "strip_size_kb": 0, 00:39:52.587 "state": "online", 00:39:52.587 "raid_level": "raid1", 00:39:52.587 "superblock": true, 00:39:52.587 "num_base_bdevs": 2, 00:39:52.587 "num_base_bdevs_discovered": 2, 00:39:52.587 "num_base_bdevs_operational": 2, 00:39:52.587 "base_bdevs_list": [ 00:39:52.587 { 00:39:52.587 "name": "pt1", 00:39:52.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:52.587 "is_configured": true, 00:39:52.587 "data_offset": 256, 00:39:52.587 "data_size": 7936 00:39:52.587 }, 00:39:52.587 { 00:39:52.587 "name": "pt2", 00:39:52.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:52.587 "is_configured": true, 00:39:52.587 "data_offset": 256, 00:39:52.587 "data_size": 7936 00:39:52.587 } 00:39:52.587 ] 00:39:52.587 }' 00:39:52.587 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:52.587 12:03:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:53.167 12:03:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:39:53.167 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:39:53.167 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:39:53.167 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:39:53.167 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:39:53.167 12:03:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:39:53.167 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:53.167 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:39:53.426 [2024-06-10 12:03:25.234206] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:53.426 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:39:53.426 "name": "raid_bdev1", 00:39:53.426 "aliases": [ 00:39:53.426 "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358" 00:39:53.426 ], 00:39:53.426 "product_name": "Raid Volume", 00:39:53.426 "block_size": 4096, 00:39:53.426 "num_blocks": 7936, 00:39:53.426 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:53.426 "assigned_rate_limits": { 00:39:53.426 "rw_ios_per_sec": 0, 00:39:53.426 "rw_mbytes_per_sec": 0, 00:39:53.426 "r_mbytes_per_sec": 0, 00:39:53.426 "w_mbytes_per_sec": 0 00:39:53.426 }, 00:39:53.426 "claimed": false, 00:39:53.426 "zoned": false, 00:39:53.426 "supported_io_types": { 00:39:53.426 "read": true, 00:39:53.426 "write": true, 00:39:53.426 "unmap": false, 00:39:53.426 "write_zeroes": true, 00:39:53.426 "flush": false, 00:39:53.426 "reset": true, 00:39:53.426 "compare": false, 00:39:53.426 "compare_and_write": false, 00:39:53.426 "abort": false, 00:39:53.426 "nvme_admin": false, 00:39:53.426 "nvme_io": false 00:39:53.426 }, 00:39:53.426 "memory_domains": [ 00:39:53.426 { 00:39:53.426 "dma_device_id": "system", 00:39:53.426 "dma_device_type": 1 00:39:53.426 }, 00:39:53.426 { 00:39:53.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:53.426 "dma_device_type": 2 00:39:53.426 }, 00:39:53.426 { 00:39:53.426 "dma_device_id": "system", 00:39:53.426 "dma_device_type": 1 00:39:53.426 }, 00:39:53.426 { 00:39:53.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:53.426 "dma_device_type": 2 00:39:53.426 } 00:39:53.426 ], 00:39:53.426 "driver_specific": { 00:39:53.426 "raid": { 00:39:53.426 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:53.426 "strip_size_kb": 0, 00:39:53.426 "state": "online", 00:39:53.426 "raid_level": "raid1", 00:39:53.426 "superblock": true, 00:39:53.426 "num_base_bdevs": 2, 00:39:53.426 "num_base_bdevs_discovered": 2, 00:39:53.426 "num_base_bdevs_operational": 2, 00:39:53.426 "base_bdevs_list": [ 00:39:53.426 { 00:39:53.426 "name": "pt1", 00:39:53.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:53.426 "is_configured": true, 00:39:53.426 "data_offset": 256, 00:39:53.426 "data_size": 7936 00:39:53.426 }, 00:39:53.426 { 00:39:53.426 "name": "pt2", 00:39:53.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:53.426 "is_configured": true, 00:39:53.426 "data_offset": 256, 00:39:53.426 "data_size": 7936 00:39:53.426 } 00:39:53.426 ] 00:39:53.426 } 00:39:53.426 } 00:39:53.426 }' 00:39:53.426 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # 
jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:53.426 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:39:53.426 pt2' 00:39:53.426 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:53.426 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:53.426 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:39:53.426 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:53.426 "name": "pt1", 00:39:53.426 "aliases": [ 00:39:53.426 "00000000-0000-0000-0000-000000000001" 00:39:53.426 ], 00:39:53.426 "product_name": "passthru", 00:39:53.426 "block_size": 4096, 00:39:53.426 "num_blocks": 8192, 00:39:53.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:53.426 "assigned_rate_limits": { 00:39:53.426 "rw_ios_per_sec": 0, 00:39:53.426 "rw_mbytes_per_sec": 0, 00:39:53.426 "r_mbytes_per_sec": 0, 00:39:53.426 "w_mbytes_per_sec": 0 00:39:53.426 }, 00:39:53.426 "claimed": true, 00:39:53.426 "claim_type": "exclusive_write", 00:39:53.426 "zoned": false, 00:39:53.426 "supported_io_types": { 00:39:53.426 "read": true, 00:39:53.426 "write": true, 00:39:53.426 "unmap": true, 00:39:53.426 "write_zeroes": true, 00:39:53.426 "flush": true, 00:39:53.426 "reset": true, 00:39:53.426 "compare": false, 00:39:53.427 "compare_and_write": false, 00:39:53.427 "abort": true, 00:39:53.427 "nvme_admin": false, 00:39:53.427 "nvme_io": false 00:39:53.427 }, 00:39:53.427 "memory_domains": [ 00:39:53.427 { 00:39:53.427 "dma_device_id": "system", 00:39:53.427 "dma_device_type": 1 00:39:53.427 }, 00:39:53.427 { 00:39:53.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:53.427 "dma_device_type": 2 00:39:53.427 } 00:39:53.427 ], 00:39:53.427 "driver_specific": { 00:39:53.427 "passthru": { 00:39:53.427 "name": "pt1", 00:39:53.427 "base_bdev_name": "malloc1" 00:39:53.427 } 00:39:53.427 } 00:39:53.427 }' 00:39:53.427 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:53.728 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:53.987 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:53.987 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:53.987 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:53.987 12:03:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:39:53.987 12:03:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:54.272 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:54.272 "name": "pt2", 00:39:54.272 "aliases": [ 00:39:54.272 "00000000-0000-0000-0000-000000000002" 00:39:54.272 ], 00:39:54.272 "product_name": "passthru", 00:39:54.272 "block_size": 4096, 00:39:54.272 "num_blocks": 8192, 00:39:54.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:54.272 "assigned_rate_limits": { 00:39:54.272 "rw_ios_per_sec": 0, 00:39:54.272 "rw_mbytes_per_sec": 0, 00:39:54.272 "r_mbytes_per_sec": 0, 00:39:54.272 "w_mbytes_per_sec": 0 00:39:54.272 }, 00:39:54.272 "claimed": true, 00:39:54.272 "claim_type": "exclusive_write", 00:39:54.272 "zoned": false, 00:39:54.272 "supported_io_types": { 00:39:54.272 "read": true, 00:39:54.272 "write": true, 00:39:54.272 "unmap": true, 00:39:54.272 "write_zeroes": true, 00:39:54.272 "flush": true, 00:39:54.272 "reset": true, 00:39:54.272 "compare": false, 00:39:54.272 "compare_and_write": false, 00:39:54.272 "abort": true, 00:39:54.272 "nvme_admin": false, 00:39:54.272 "nvme_io": false 00:39:54.272 }, 00:39:54.272 "memory_domains": [ 00:39:54.272 { 00:39:54.272 "dma_device_id": "system", 00:39:54.272 "dma_device_type": 1 00:39:54.272 }, 00:39:54.272 { 00:39:54.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:54.272 "dma_device_type": 2 00:39:54.272 } 00:39:54.272 ], 00:39:54.272 "driver_specific": { 00:39:54.272 "passthru": { 00:39:54.272 "name": "pt2", 00:39:54.272 "base_bdev_name": "malloc2" 00:39:54.272 } 00:39:54.272 } 00:39:54.272 }' 00:39:54.272 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:54.272 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:54.272 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:54.272 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:54.272 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:54.551 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:39:54.809 [2024-06-10 12:03:26.670493] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:54.809 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358 '!=' 
84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358 ']' 00:39:54.809 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:39:54.809 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:39:54.809 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:39:54.809 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:55.145 [2024-06-10 12:03:26.942413] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:55.145 12:03:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:55.424 12:03:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:55.424 "name": "raid_bdev1", 00:39:55.424 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:55.424 "strip_size_kb": 0, 00:39:55.424 "state": "online", 00:39:55.424 "raid_level": "raid1", 00:39:55.424 "superblock": true, 00:39:55.424 "num_base_bdevs": 2, 00:39:55.424 "num_base_bdevs_discovered": 1, 00:39:55.424 "num_base_bdevs_operational": 1, 00:39:55.424 "base_bdevs_list": [ 00:39:55.424 { 00:39:55.424 "name": null, 00:39:55.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:55.424 "is_configured": false, 00:39:55.424 "data_offset": 256, 00:39:55.424 "data_size": 7936 00:39:55.424 }, 00:39:55.424 { 00:39:55.424 "name": "pt2", 00:39:55.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:55.424 "is_configured": true, 00:39:55.424 "data_offset": 256, 00:39:55.424 "data_size": 7936 00:39:55.424 } 00:39:55.424 ] 00:39:55.424 }' 00:39:55.424 12:03:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:55.424 12:03:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:56.039 12:03:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:56.039 [2024-06-10 12:03:28.024071] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:56.039 [2024-06-10 
12:03:28.024301] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:56.039 [2024-06-10 12:03:28.024506] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:56.039 [2024-06-10 12:03:28.024590] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:56.039 [2024-06-10 12:03:28.024781] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:39:56.039 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:56.039 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:39:56.312 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:39:56.312 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:39:56.312 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:39:56.312 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:39:56.312 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:56.570 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:39:56.570 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:39:56.570 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:39:56.570 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:39:56.570 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:39:56.570 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:56.829 [2024-06-10 12:03:28.744197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:56.829 [2024-06-10 12:03:28.744512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:56.829 [2024-06-10 12:03:28.744586] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:56.829 [2024-06-10 12:03:28.744679] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:56.829 [2024-06-10 12:03:28.747033] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:56.829 [2024-06-10 12:03:28.747216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:56.829 [2024-06-10 12:03:28.747450] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:56.829 [2024-06-10 12:03:28.747586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:56.829 [2024-06-10 12:03:28.747735] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:39:56.829 [2024-06-10 12:03:28.747902] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:56.829 [2024-06-10 12:03:28.748037] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:56.829 [2024-06-10 12:03:28.748474] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:39:56.829 [2024-06-10 12:03:28.748582] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:39:56.829 [2024-06-10 12:03:28.748856] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:56.829 pt2 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:56.829 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:57.088 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:57.088 "name": "raid_bdev1", 00:39:57.088 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:57.088 "strip_size_kb": 0, 00:39:57.088 "state": "online", 00:39:57.088 "raid_level": "raid1", 00:39:57.088 "superblock": true, 00:39:57.088 "num_base_bdevs": 2, 00:39:57.088 "num_base_bdevs_discovered": 1, 00:39:57.088 "num_base_bdevs_operational": 1, 00:39:57.088 "base_bdevs_list": [ 00:39:57.088 { 00:39:57.088 "name": null, 00:39:57.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:57.088 "is_configured": false, 00:39:57.088 "data_offset": 256, 00:39:57.088 "data_size": 7936 00:39:57.088 }, 00:39:57.088 { 00:39:57.088 "name": "pt2", 00:39:57.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:57.088 "is_configured": true, 00:39:57.088 "data_offset": 256, 00:39:57.088 "data_size": 7936 00:39:57.088 } 00:39:57.088 ] 00:39:57.088 }' 00:39:57.088 12:03:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:57.088 12:03:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:57.656 12:03:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:57.656 [2024-06-10 12:03:29.700969] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:57.656 [2024-06-10 12:03:29.701234] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:57.656 [2024-06-10 12:03:29.701438] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:57.656 [2024-06-10 
12:03:29.701598] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:57.656 [2024-06-10 12:03:29.701693] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:39:57.915 12:03:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:57.915 12:03:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:39:58.202 12:03:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:39:58.202 12:03:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:39:58.202 12:03:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:39:58.202 12:03:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:58.202 [2024-06-10 12:03:30.149029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:58.202 [2024-06-10 12:03:30.149309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:58.202 [2024-06-10 12:03:30.149387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:39:58.202 [2024-06-10 12:03:30.149491] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:58.202 [2024-06-10 12:03:30.151777] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:58.202 [2024-06-10 12:03:30.151949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:58.202 [2024-06-10 12:03:30.152139] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:58.202 [2024-06-10 12:03:30.152271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:58.202 [2024-06-10 12:03:30.152437] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:58.202 [2024-06-10 12:03:30.152571] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:58.202 [2024-06-10 12:03:30.152617] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:39:58.202 [2024-06-10 12:03:30.152695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:58.202 [2024-06-10 12:03:30.152951] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:39:58.202 [2024-06-10 12:03:30.153060] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:58.202 [2024-06-10 12:03:30.153181] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:58.202 [2024-06-10 12:03:30.153587] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:39:58.202 [2024-06-10 12:03:30.153682] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:39:58.202 [2024-06-10 12:03:30.153935] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:58.202 pt1 00:39:58.202 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:39:58.202 
12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:58.202 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:58.203 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:58.460 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:58.460 "name": "raid_bdev1", 00:39:58.460 "uuid": "84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358", 00:39:58.460 "strip_size_kb": 0, 00:39:58.460 "state": "online", 00:39:58.460 "raid_level": "raid1", 00:39:58.460 "superblock": true, 00:39:58.460 "num_base_bdevs": 2, 00:39:58.460 "num_base_bdevs_discovered": 1, 00:39:58.460 "num_base_bdevs_operational": 1, 00:39:58.460 "base_bdevs_list": [ 00:39:58.460 { 00:39:58.460 "name": null, 00:39:58.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:58.460 "is_configured": false, 00:39:58.460 "data_offset": 256, 00:39:58.460 "data_size": 7936 00:39:58.460 }, 00:39:58.460 { 00:39:58.460 "name": "pt2", 00:39:58.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:58.460 "is_configured": true, 00:39:58.460 "data_offset": 256, 00:39:58.460 "data_size": 7936 00:39:58.460 } 00:39:58.460 ] 00:39:58.460 }' 00:39:58.460 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:58.460 12:03:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:59.026 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:59.026 12:03:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:39:59.284 12:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:39:59.284 12:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:59.284 12:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:39:59.284 [2024-06-10 12:03:31.325430] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 
84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358 '!=' 84ea9e57-a0aa-4b2e-8e0d-1f7b5e10c358 ']' 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 161195 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@949 -- # '[' -z 161195 ']' 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # kill -0 161195 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # uname 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 161195 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # echo 'killing process with pid 161195' 00:39:59.543 killing process with pid 161195 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # kill 161195 00:39:59.543 [2024-06-10 12:03:31.380745] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:59.543 [2024-06-10 12:03:31.380938] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:59.543 [2024-06-10 12:03:31.381061] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:59.543 [2024-06-10 12:03:31.381134] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:39:59.543 12:03:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # wait 161195 00:39:59.543 [2024-06-10 12:03:31.579287] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:00.923 ************************************ 00:40:00.923 END TEST raid_superblock_test_4k 00:40:00.923 ************************************ 00:40:00.923 12:03:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:40:00.923 00:40:00.923 real 0m15.904s 00:40:00.923 user 0m27.875s 00:40:00.923 sys 0m2.591s 00:40:00.923 12:03:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:00.923 12:03:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:40:00.923 12:03:32 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:40:00.923 12:03:32 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:40:00.923 12:03:32 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:40:00.923 12:03:32 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:00.923 12:03:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:00.923 ************************************ 00:40:00.923 START TEST raid_rebuild_test_sb_4k 00:40:00.923 ************************************ 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false true 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:40:00.923 12:03:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=161713 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 161713 /var/tmp/spdk-raid.sock 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@830 -- # '[' -z 161713 ']' 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:00.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:00.923 12:03:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:01.183 [2024-06-10 12:03:33.019374] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:40:01.183 [2024-06-10 12:03:33.020296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161713 ] 00:40:01.183 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:01.183 Zero copy mechanism will not be used. 00:40:01.183 [2024-06-10 12:03:33.203568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:01.442 [2024-06-10 12:03:33.483966] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.700 [2024-06-10 12:03:33.713377] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:01.958 12:03:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:01.958 12:03:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@863 -- # return 0 00:40:01.958 12:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:40:01.959 12:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:40:02.217 BaseBdev1_malloc 00:40:02.217 12:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:02.475 [2024-06-10 12:03:34.417530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:02.475 [2024-06-10 12:03:34.417668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:02.475 [2024-06-10 12:03:34.417719] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:40:02.475 [2024-06-10 12:03:34.417741] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:02.475 [2024-06-10 12:03:34.420602] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:02.475 [2024-06-10 12:03:34.420663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:02.475 BaseBdev1 00:40:02.475 12:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:40:02.475 12:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:40:02.732 BaseBdev2_malloc 00:40:02.732 12:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:02.989 [2024-06-10 12:03:34.959218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:02.989 [2024-06-10 12:03:34.959378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:02.989 [2024-06-10 12:03:34.959448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 
00:40:02.989 [2024-06-10 12:03:34.959473] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:02.989 [2024-06-10 12:03:34.962209] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:02.989 [2024-06-10 12:03:34.962278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:02.989 BaseBdev2 00:40:02.989 12:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:40:03.248 spare_malloc 00:40:03.248 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:03.506 spare_delay 00:40:03.506 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:03.765 [2024-06-10 12:03:35.606231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:03.765 [2024-06-10 12:03:35.606343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:03.765 [2024-06-10 12:03:35.606382] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:40:03.765 [2024-06-10 12:03:35.606413] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:03.765 [2024-06-10 12:03:35.609006] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:03.765 [2024-06-10 12:03:35.609066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:03.765 spare 00:40:03.765 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:40:03.765 [2024-06-10 12:03:35.806358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:03.765 [2024-06-10 12:03:35.808705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:03.765 [2024-06-10 12:03:35.808960] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:40:03.765 [2024-06-10 12:03:35.808974] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:03.765 [2024-06-10 12:03:35.809138] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:40:03.765 [2024-06-10 12:03:35.809554] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:40:03.765 [2024-06-10 12:03:35.809578] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:40:03.765 [2024-06-10 12:03:35.809759] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:04.023 12:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:04.282 12:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:04.282 "name": "raid_bdev1", 00:40:04.282 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:04.282 "strip_size_kb": 0, 00:40:04.282 "state": "online", 00:40:04.282 "raid_level": "raid1", 00:40:04.282 "superblock": true, 00:40:04.282 "num_base_bdevs": 2, 00:40:04.282 "num_base_bdevs_discovered": 2, 00:40:04.282 "num_base_bdevs_operational": 2, 00:40:04.282 "base_bdevs_list": [ 00:40:04.282 { 00:40:04.282 "name": "BaseBdev1", 00:40:04.282 "uuid": "321f0753-36a7-527c-ab63-291e4aad5a61", 00:40:04.282 "is_configured": true, 00:40:04.282 "data_offset": 256, 00:40:04.282 "data_size": 7936 00:40:04.282 }, 00:40:04.282 { 00:40:04.282 "name": "BaseBdev2", 00:40:04.282 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:04.282 "is_configured": true, 00:40:04.282 "data_offset": 256, 00:40:04.282 "data_size": 7936 00:40:04.282 } 00:40:04.282 ] 00:40:04.282 }' 00:40:04.282 12:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:04.282 12:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:04.893 12:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:04.893 12:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:40:05.188 [2024-06-10 12:03:36.974791] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:05.188 12:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:40:05.188 12:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:05.188 12:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 
00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:05.188 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:40:05.446 [2024-06-10 12:03:37.478841] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:40:05.705 /dev/nbd0 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local i 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # break 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:05.705 1+0 records in 00:40:05.705 1+0 records out 00:40:05.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332001 s, 12.3 MB/s 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # size=4096 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # return 0 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:05.705 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:40:05.706 12:03:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:40:05.706 12:03:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:40:06.273 7936+0 records in 00:40:06.273 7936+0 records out 00:40:06.273 32505856 bytes (33 MB, 31 MiB) copied, 0.74564 s, 43.6 MB/s 00:40:06.273 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:40:06.273 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:06.273 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:06.273 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:06.273 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:40:06.273 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:06.273 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:06.531 [2024-06-10 12:03:38.586198] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:06.531 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:06.790 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:06.790 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:06.790 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:06.790 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:06.790 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:06.790 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:40:06.790 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:40:06.791 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:40:07.050 [2024-06-10 12:03:38.853945] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:07.050 12:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:07.050 12:03:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:07.050 "name": "raid_bdev1", 00:40:07.050 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:07.050 "strip_size_kb": 0, 00:40:07.050 "state": "online", 00:40:07.050 "raid_level": "raid1", 00:40:07.050 "superblock": true, 00:40:07.050 "num_base_bdevs": 2, 00:40:07.050 "num_base_bdevs_discovered": 1, 00:40:07.050 "num_base_bdevs_operational": 1, 00:40:07.050 "base_bdevs_list": [ 00:40:07.050 { 00:40:07.050 "name": null, 00:40:07.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:07.050 "is_configured": false, 00:40:07.050 "data_offset": 256, 00:40:07.050 "data_size": 7936 00:40:07.050 }, 00:40:07.050 { 00:40:07.050 "name": "BaseBdev2", 00:40:07.050 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:07.050 "is_configured": true, 00:40:07.050 "data_offset": 256, 00:40:07.050 "data_size": 7936 00:40:07.050 } 00:40:07.050 ] 00:40:07.050 }' 00:40:07.050 12:03:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:07.050 12:03:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:07.986 12:03:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:07.986 [2024-06-10 12:03:39.995830] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:07.986 [2024-06-10 12:03:40.017573] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018cff0 00:40:07.986 [2024-06-10 12:03:40.019839] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:07.986 12:03:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:09.362 "name": "raid_bdev1", 00:40:09.362 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:09.362 "strip_size_kb": 0, 00:40:09.362 "state": "online", 00:40:09.362 "raid_level": "raid1", 00:40:09.362 "superblock": true, 00:40:09.362 "num_base_bdevs": 2, 00:40:09.362 "num_base_bdevs_discovered": 2, 00:40:09.362 "num_base_bdevs_operational": 2, 
00:40:09.362 "process": { 00:40:09.362 "type": "rebuild", 00:40:09.362 "target": "spare", 00:40:09.362 "progress": { 00:40:09.362 "blocks": 3072, 00:40:09.362 "percent": 38 00:40:09.362 } 00:40:09.362 }, 00:40:09.362 "base_bdevs_list": [ 00:40:09.362 { 00:40:09.362 "name": "spare", 00:40:09.362 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:09.362 "is_configured": true, 00:40:09.362 "data_offset": 256, 00:40:09.362 "data_size": 7936 00:40:09.362 }, 00:40:09.362 { 00:40:09.362 "name": "BaseBdev2", 00:40:09.362 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:09.362 "is_configured": true, 00:40:09.362 "data_offset": 256, 00:40:09.362 "data_size": 7936 00:40:09.362 } 00:40:09.362 ] 00:40:09.362 }' 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:09.362 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:09.621 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:09.621 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:40:09.880 [2024-06-10 12:03:41.721370] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:09.880 [2024-06-10 12:03:41.731164] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:09.880 [2024-06-10 12:03:41.731238] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:09.880 [2024-06-10 12:03:41.731255] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:09.880 [2024-06-10 12:03:41.731263] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:09.880 12:03:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:10.138 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:40:10.138 "name": "raid_bdev1", 00:40:10.138 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:10.138 "strip_size_kb": 0, 00:40:10.138 "state": "online", 00:40:10.138 "raid_level": "raid1", 00:40:10.138 "superblock": true, 00:40:10.138 "num_base_bdevs": 2, 00:40:10.138 "num_base_bdevs_discovered": 1, 00:40:10.138 "num_base_bdevs_operational": 1, 00:40:10.138 "base_bdevs_list": [ 00:40:10.138 { 00:40:10.138 "name": null, 00:40:10.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:10.138 "is_configured": false, 00:40:10.138 "data_offset": 256, 00:40:10.138 "data_size": 7936 00:40:10.138 }, 00:40:10.138 { 00:40:10.138 "name": "BaseBdev2", 00:40:10.138 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:10.138 "is_configured": true, 00:40:10.138 "data_offset": 256, 00:40:10.138 "data_size": 7936 00:40:10.138 } 00:40:10.138 ] 00:40:10.138 }' 00:40:10.138 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:10.138 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:10.706 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:10.706 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:10.706 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:10.706 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:10.706 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:10.706 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:10.706 12:03:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:11.273 12:03:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:11.273 "name": "raid_bdev1", 00:40:11.273 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:11.273 "strip_size_kb": 0, 00:40:11.273 "state": "online", 00:40:11.273 "raid_level": "raid1", 00:40:11.273 "superblock": true, 00:40:11.273 "num_base_bdevs": 2, 00:40:11.273 "num_base_bdevs_discovered": 1, 00:40:11.273 "num_base_bdevs_operational": 1, 00:40:11.273 "base_bdevs_list": [ 00:40:11.273 { 00:40:11.273 "name": null, 00:40:11.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:11.273 "is_configured": false, 00:40:11.273 "data_offset": 256, 00:40:11.273 "data_size": 7936 00:40:11.273 }, 00:40:11.273 { 00:40:11.274 "name": "BaseBdev2", 00:40:11.274 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:11.274 "is_configured": true, 00:40:11.274 "data_offset": 256, 00:40:11.274 "data_size": 7936 00:40:11.274 } 00:40:11.274 ] 00:40:11.274 }' 00:40:11.274 12:03:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:11.274 12:03:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:11.274 12:03:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:11.274 12:03:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:11.274 12:03:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
raid_bdev1 spare 00:40:11.533 [2024-06-10 12:03:43.408857] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:11.533 [2024-06-10 12:03:43.427757] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:40:11.533 [2024-06-10 12:03:43.429927] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:11.533 12:03:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:12.470 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:12.471 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:12.471 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:12.471 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:12.471 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:12.471 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:12.471 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:12.729 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:12.730 "name": "raid_bdev1", 00:40:12.730 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:12.730 "strip_size_kb": 0, 00:40:12.730 "state": "online", 00:40:12.730 "raid_level": "raid1", 00:40:12.730 "superblock": true, 00:40:12.730 "num_base_bdevs": 2, 00:40:12.730 "num_base_bdevs_discovered": 2, 00:40:12.730 "num_base_bdevs_operational": 2, 00:40:12.730 "process": { 00:40:12.730 "type": "rebuild", 00:40:12.730 "target": "spare", 00:40:12.730 "progress": { 00:40:12.730 "blocks": 3072, 00:40:12.730 "percent": 38 00:40:12.730 } 00:40:12.730 }, 00:40:12.730 "base_bdevs_list": [ 00:40:12.730 { 00:40:12.730 "name": "spare", 00:40:12.730 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:12.730 "is_configured": true, 00:40:12.730 "data_offset": 256, 00:40:12.730 "data_size": 7936 00:40:12.730 }, 00:40:12.730 { 00:40:12.730 "name": "BaseBdev2", 00:40:12.730 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:12.730 "is_configured": true, 00:40:12.730 "data_offset": 256, 00:40:12.730 "data_size": 7936 00:40:12.730 } 00:40:12.730 ] 00:40:12.730 }' 00:40:12.730 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:40:12.989 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 
raid1 = raid1 ']' 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1484 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:12.989 12:03:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:13.247 12:03:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:13.247 "name": "raid_bdev1", 00:40:13.247 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:13.247 "strip_size_kb": 0, 00:40:13.247 "state": "online", 00:40:13.247 "raid_level": "raid1", 00:40:13.247 "superblock": true, 00:40:13.247 "num_base_bdevs": 2, 00:40:13.247 "num_base_bdevs_discovered": 2, 00:40:13.247 "num_base_bdevs_operational": 2, 00:40:13.247 "process": { 00:40:13.247 "type": "rebuild", 00:40:13.247 "target": "spare", 00:40:13.247 "progress": { 00:40:13.247 "blocks": 4096, 00:40:13.247 "percent": 51 00:40:13.247 } 00:40:13.247 }, 00:40:13.247 "base_bdevs_list": [ 00:40:13.247 { 00:40:13.248 "name": "spare", 00:40:13.248 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:13.248 "is_configured": true, 00:40:13.248 "data_offset": 256, 00:40:13.248 "data_size": 7936 00:40:13.248 }, 00:40:13.248 { 00:40:13.248 "name": "BaseBdev2", 00:40:13.248 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:13.248 "is_configured": true, 00:40:13.248 "data_offset": 256, 00:40:13.248 "data_size": 7936 00:40:13.248 } 00:40:13.248 ] 00:40:13.248 }' 00:40:13.248 12:03:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:13.248 12:03:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:13.248 12:03:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:13.248 12:03:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:13.248 12:03:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:40:14.185 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:40:14.185 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:14.185 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:14.185 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:14.185 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 
00:40:14.185 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:14.185 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:14.185 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:14.444 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:14.444 "name": "raid_bdev1", 00:40:14.444 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:14.444 "strip_size_kb": 0, 00:40:14.444 "state": "online", 00:40:14.444 "raid_level": "raid1", 00:40:14.444 "superblock": true, 00:40:14.444 "num_base_bdevs": 2, 00:40:14.444 "num_base_bdevs_discovered": 2, 00:40:14.444 "num_base_bdevs_operational": 2, 00:40:14.444 "process": { 00:40:14.444 "type": "rebuild", 00:40:14.444 "target": "spare", 00:40:14.444 "progress": { 00:40:14.444 "blocks": 7424, 00:40:14.444 "percent": 93 00:40:14.444 } 00:40:14.444 }, 00:40:14.444 "base_bdevs_list": [ 00:40:14.444 { 00:40:14.444 "name": "spare", 00:40:14.444 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:14.444 "is_configured": true, 00:40:14.444 "data_offset": 256, 00:40:14.444 "data_size": 7936 00:40:14.444 }, 00:40:14.444 { 00:40:14.444 "name": "BaseBdev2", 00:40:14.444 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:14.444 "is_configured": true, 00:40:14.444 "data_offset": 256, 00:40:14.444 "data_size": 7936 00:40:14.444 } 00:40:14.444 ] 00:40:14.444 }' 00:40:14.444 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:14.444 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:14.444 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:14.702 [2024-06-10 12:03:46.550646] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:14.702 [2024-06-10 12:03:46.550720] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:14.702 [2024-06-10 12:03:46.550844] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:14.702 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:14.702 12:03:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:40:15.662 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:40:15.662 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:15.662 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:15.662 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:15.662 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:15.662 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:15.662 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:15.662 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:15.921 
12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:15.921 "name": "raid_bdev1", 00:40:15.921 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:15.921 "strip_size_kb": 0, 00:40:15.921 "state": "online", 00:40:15.921 "raid_level": "raid1", 00:40:15.921 "superblock": true, 00:40:15.921 "num_base_bdevs": 2, 00:40:15.921 "num_base_bdevs_discovered": 2, 00:40:15.921 "num_base_bdevs_operational": 2, 00:40:15.921 "base_bdevs_list": [ 00:40:15.921 { 00:40:15.921 "name": "spare", 00:40:15.921 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:15.921 "is_configured": true, 00:40:15.921 "data_offset": 256, 00:40:15.921 "data_size": 7936 00:40:15.921 }, 00:40:15.921 { 00:40:15.921 "name": "BaseBdev2", 00:40:15.921 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:15.921 "is_configured": true, 00:40:15.921 "data_offset": 256, 00:40:15.921 "data_size": 7936 00:40:15.921 } 00:40:15.921 ] 00:40:15.921 }' 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:15.921 12:03:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:16.180 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:16.180 "name": "raid_bdev1", 00:40:16.180 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:16.180 "strip_size_kb": 0, 00:40:16.180 "state": "online", 00:40:16.180 "raid_level": "raid1", 00:40:16.180 "superblock": true, 00:40:16.180 "num_base_bdevs": 2, 00:40:16.180 "num_base_bdevs_discovered": 2, 00:40:16.180 "num_base_bdevs_operational": 2, 00:40:16.180 "base_bdevs_list": [ 00:40:16.180 { 00:40:16.180 "name": "spare", 00:40:16.180 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:16.180 "is_configured": true, 00:40:16.180 "data_offset": 256, 00:40:16.180 "data_size": 7936 00:40:16.180 }, 00:40:16.180 { 00:40:16.180 "name": "BaseBdev2", 00:40:16.180 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:16.180 "is_configured": true, 00:40:16.180 "data_offset": 256, 00:40:16.180 "data_size": 7936 00:40:16.180 } 00:40:16.180 ] 00:40:16.180 }' 00:40:16.180 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:16.438 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:16.697 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:16.697 "name": "raid_bdev1", 00:40:16.697 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:16.697 "strip_size_kb": 0, 00:40:16.697 "state": "online", 00:40:16.697 "raid_level": "raid1", 00:40:16.697 "superblock": true, 00:40:16.697 "num_base_bdevs": 2, 00:40:16.697 "num_base_bdevs_discovered": 2, 00:40:16.697 "num_base_bdevs_operational": 2, 00:40:16.697 "base_bdevs_list": [ 00:40:16.697 { 00:40:16.697 "name": "spare", 00:40:16.697 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:16.697 "is_configured": true, 00:40:16.697 "data_offset": 256, 00:40:16.697 "data_size": 7936 00:40:16.697 }, 00:40:16.697 { 00:40:16.697 "name": "BaseBdev2", 00:40:16.697 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:16.697 "is_configured": true, 00:40:16.697 "data_offset": 256, 00:40:16.697 "data_size": 7936 00:40:16.697 } 00:40:16.697 ] 00:40:16.697 }' 00:40:16.697 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:16.697 12:03:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:17.270 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:17.528 [2024-06-10 12:03:49.430630] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:17.528 [2024-06-10 12:03:49.430686] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:17.528 [2024-06-10 12:03:49.430761] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:17.528 [2024-06-10 12:03:49.430841] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:40:17.528 [2024-06-10 12:03:49.430854] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:40:17.528 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:17.528 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:17.786 12:03:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:18.043 /dev/nbd0 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local i 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # break 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:40:18.043 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:18.043 1+0 records in 00:40:18.043 1+0 records out 00:40:18.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652539 s, 6.3 MB/s 00:40:18.302 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.302 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # size=4096 00:40:18.302 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.302 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:40:18.302 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # return 0 00:40:18.302 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:18.302 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:18.302 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:40:18.560 /dev/nbd1 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local i 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # break 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:18.560 1+0 records in 00:40:18.560 1+0 records out 00:40:18.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123544 s, 3.3 MB/s 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # size=4096 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # return 0 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:18.560 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:40:18.819 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:40:18.819 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:18.819 12:03:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:18.819 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:18.819 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:40:18.819 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:18.819 12:03:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:19.078 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:40:19.345 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:40:19.920 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:19.920 [2024-06-10 12:03:51.959493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:19.920 [2024-06-10 12:03:51.959593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:19.920 [2024-06-10 12:03:51.959657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:40:19.920 [2024-06-10 12:03:51.959681] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:19.920 [2024-06-10 12:03:51.962350] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:19.920 [2024-06-10 12:03:51.962412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:19.920 [2024-06-10 12:03:51.962553] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:19.920 [2024-06-10 12:03:51.962614] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:19.920 [2024-06-10 12:03:51.962772] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:19.920 spare 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:20.179 12:03:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:20.179 [2024-06-10 12:03:52.062871] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:40:20.179 [2024-06-10 12:03:52.062942] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:20.179 [2024-06-10 12:03:52.063182] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:40:20.179 [2024-06-10 12:03:52.063668] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:40:20.179 [2024-06-10 12:03:52.063691] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:40:20.179 [2024-06-10 12:03:52.063898] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:20.470 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:20.470 "name": "raid_bdev1", 00:40:20.470 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:20.470 "strip_size_kb": 0, 00:40:20.470 "state": "online", 00:40:20.470 "raid_level": "raid1", 00:40:20.470 "superblock": true, 00:40:20.470 "num_base_bdevs": 2, 00:40:20.470 "num_base_bdevs_discovered": 2, 00:40:20.470 "num_base_bdevs_operational": 2, 00:40:20.470 "base_bdevs_list": [ 00:40:20.470 { 00:40:20.470 "name": "spare", 00:40:20.470 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:20.470 "is_configured": true, 00:40:20.470 "data_offset": 256, 00:40:20.470 "data_size": 7936 00:40:20.470 }, 00:40:20.470 { 
00:40:20.470 "name": "BaseBdev2", 00:40:20.470 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:20.470 "is_configured": true, 00:40:20.470 "data_offset": 256, 00:40:20.470 "data_size": 7936 00:40:20.470 } 00:40:20.470 ] 00:40:20.470 }' 00:40:20.470 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:20.470 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:21.039 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:21.039 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:21.039 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:21.039 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:21.039 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:21.039 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:21.039 12:03:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:21.297 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:21.297 "name": "raid_bdev1", 00:40:21.297 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:21.297 "strip_size_kb": 0, 00:40:21.297 "state": "online", 00:40:21.297 "raid_level": "raid1", 00:40:21.297 "superblock": true, 00:40:21.297 "num_base_bdevs": 2, 00:40:21.297 "num_base_bdevs_discovered": 2, 00:40:21.297 "num_base_bdevs_operational": 2, 00:40:21.297 "base_bdevs_list": [ 00:40:21.297 { 00:40:21.297 "name": "spare", 00:40:21.297 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:21.297 "is_configured": true, 00:40:21.297 "data_offset": 256, 00:40:21.297 "data_size": 7936 00:40:21.297 }, 00:40:21.297 { 00:40:21.297 "name": "BaseBdev2", 00:40:21.297 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:21.297 "is_configured": true, 00:40:21.297 "data_offset": 256, 00:40:21.297 "data_size": 7936 00:40:21.297 } 00:40:21.297 ] 00:40:21.297 }' 00:40:21.297 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:21.297 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:21.297 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:21.556 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:21.556 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:21.556 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:40:21.814 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:40:21.814 12:03:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:40:22.073 [2024-06-10 12:03:54.008434] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:22.073 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:22.332 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:22.332 "name": "raid_bdev1", 00:40:22.332 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:22.332 "strip_size_kb": 0, 00:40:22.332 "state": "online", 00:40:22.332 "raid_level": "raid1", 00:40:22.332 "superblock": true, 00:40:22.332 "num_base_bdevs": 2, 00:40:22.332 "num_base_bdevs_discovered": 1, 00:40:22.332 "num_base_bdevs_operational": 1, 00:40:22.332 "base_bdevs_list": [ 00:40:22.332 { 00:40:22.332 "name": null, 00:40:22.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.332 "is_configured": false, 00:40:22.332 "data_offset": 256, 00:40:22.332 "data_size": 7936 00:40:22.332 }, 00:40:22.332 { 00:40:22.332 "name": "BaseBdev2", 00:40:22.332 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:22.332 "is_configured": true, 00:40:22.332 "data_offset": 256, 00:40:22.332 "data_size": 7936 00:40:22.332 } 00:40:22.332 ] 00:40:22.332 }' 00:40:22.332 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:22.332 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:22.899 12:03:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:23.467 [2024-06-10 12:03:55.220757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:23.467 [2024-06-10 12:03:55.220993] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:23.467 [2024-06-10 12:03:55.221010] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
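[editor's note] At this point the trace detaches the spare and hands it back, which makes the raid module re-read the superblock ("Re-adding bdev spare to raid bdev raid_bdev1") and start another rebuild. A hedged sketch of that remove/re-add cycle, using only the RPC calls and jq filter visible in the trace; the surrounding shell is illustrative, not the test script itself.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # detach the spare, then hand it back so the examine path re-adds it
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev spare
    "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare

    # the rebuild the re-add kicks off can be watched via the same process object
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .process'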
00:40:23.467 [2024-06-10 12:03:55.221097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:23.467 [2024-06-10 12:03:55.240245] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:40:23.467 [2024-06-10 12:03:55.242594] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:23.467 12:03:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:40:24.401 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:24.401 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:24.401 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:24.401 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:24.401 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:24.401 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:24.401 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:24.659 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:24.659 "name": "raid_bdev1", 00:40:24.659 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:24.659 "strip_size_kb": 0, 00:40:24.659 "state": "online", 00:40:24.659 "raid_level": "raid1", 00:40:24.659 "superblock": true, 00:40:24.659 "num_base_bdevs": 2, 00:40:24.659 "num_base_bdevs_discovered": 2, 00:40:24.659 "num_base_bdevs_operational": 2, 00:40:24.659 "process": { 00:40:24.659 "type": "rebuild", 00:40:24.659 "target": "spare", 00:40:24.659 "progress": { 00:40:24.659 "blocks": 3072, 00:40:24.659 "percent": 38 00:40:24.659 } 00:40:24.659 }, 00:40:24.659 "base_bdevs_list": [ 00:40:24.659 { 00:40:24.659 "name": "spare", 00:40:24.659 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:24.659 "is_configured": true, 00:40:24.659 "data_offset": 256, 00:40:24.659 "data_size": 7936 00:40:24.659 }, 00:40:24.659 { 00:40:24.659 "name": "BaseBdev2", 00:40:24.659 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:24.659 "is_configured": true, 00:40:24.659 "data_offset": 256, 00:40:24.659 "data_size": 7936 00:40:24.659 } 00:40:24.659 ] 00:40:24.659 }' 00:40:24.659 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:24.659 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:24.659 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:24.659 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:24.659 12:03:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:40:24.917 [2024-06-10 12:03:56.960307] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:25.175 [2024-06-10 12:03:57.054740] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:25.175 [2024-06-10 12:03:57.054869] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:40:25.175 [2024-06-10 12:03:57.054888] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:25.176 [2024-06-10 12:03:57.054896] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:25.176 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:25.435 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:25.435 "name": "raid_bdev1", 00:40:25.435 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:25.435 "strip_size_kb": 0, 00:40:25.435 "state": "online", 00:40:25.435 "raid_level": "raid1", 00:40:25.435 "superblock": true, 00:40:25.435 "num_base_bdevs": 2, 00:40:25.435 "num_base_bdevs_discovered": 1, 00:40:25.435 "num_base_bdevs_operational": 1, 00:40:25.435 "base_bdevs_list": [ 00:40:25.435 { 00:40:25.435 "name": null, 00:40:25.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:25.435 "is_configured": false, 00:40:25.435 "data_offset": 256, 00:40:25.435 "data_size": 7936 00:40:25.435 }, 00:40:25.435 { 00:40:25.435 "name": "BaseBdev2", 00:40:25.435 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:25.435 "is_configured": true, 00:40:25.435 "data_offset": 256, 00:40:25.435 "data_size": 7936 00:40:25.435 } 00:40:25.435 ] 00:40:25.435 }' 00:40:25.435 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:25.435 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:26.002 12:03:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:26.261 [2024-06-10 12:03:58.242632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:26.261 [2024-06-10 12:03:58.242750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:26.261 [2024-06-10 12:03:58.242798] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:40:26.261 [2024-06-10 12:03:58.242825] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:26.261 [2024-06-10 12:03:58.243387] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:26.261 [2024-06-10 12:03:58.243434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:26.261 [2024-06-10 12:03:58.243571] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:26.261 [2024-06-10 12:03:58.243584] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:26.261 [2024-06-10 12:03:58.243594] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:40:26.261 [2024-06-10 12:03:58.243639] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:26.261 [2024-06-10 12:03:58.262611] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:40:26.261 spare 00:40:26.261 [2024-06-10 12:03:58.264785] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:26.261 12:03:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:27.635 "name": "raid_bdev1", 00:40:27.635 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:27.635 "strip_size_kb": 0, 00:40:27.635 "state": "online", 00:40:27.635 "raid_level": "raid1", 00:40:27.635 "superblock": true, 00:40:27.635 "num_base_bdevs": 2, 00:40:27.635 "num_base_bdevs_discovered": 2, 00:40:27.635 "num_base_bdevs_operational": 2, 00:40:27.635 "process": { 00:40:27.635 "type": "rebuild", 00:40:27.635 "target": "spare", 00:40:27.635 "progress": { 00:40:27.635 "blocks": 3072, 00:40:27.635 "percent": 38 00:40:27.635 } 00:40:27.635 }, 00:40:27.635 "base_bdevs_list": [ 00:40:27.635 { 00:40:27.635 "name": "spare", 00:40:27.635 "uuid": "7f6e72c5-3d08-5239-8a86-06dd079dd3d2", 00:40:27.635 "is_configured": true, 00:40:27.635 "data_offset": 256, 00:40:27.635 "data_size": 7936 00:40:27.635 }, 00:40:27.635 { 00:40:27.635 "name": "BaseBdev2", 00:40:27.635 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:27.635 "is_configured": true, 00:40:27.635 "data_offset": 256, 00:40:27.635 "data_size": 7936 00:40:27.635 } 00:40:27.635 ] 00:40:27.635 }' 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
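[editor's note] The verify_raid_bdev_state calls in this part of the trace (e.g. "online raid1 0 1" after the spare is deleted mid-rebuild) boil down to checking fields of the bdev_raid_get_bdevs JSON. A reduced sketch of such an assertion is below, under the assumption of the same rpc.py path and socket as above; the field names come straight from the JSON dumps in the trace, the helper name assert_raid_state is ours.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    assert_raid_state() {
        local name=$1 state=$2 operational=$3 info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        # state must match (e.g. "online") ...
        [[ $(jq -r '.state' <<<"$info") == "$state" ]] || return 1
        # ... and so must the number of operational base bdevs
        (( $(jq -r '.num_base_bdevs_operational' <<<"$info") == operational )) || return 1
    }

    # e.g. after the spare is removed, the trace expects one operational base bdev:
    assert_raid_state raid_bdev1 online 1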
00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:27.635 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:40:27.894 [2024-06-10 12:03:59.822636] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:27.894 [2024-06-10 12:03:59.874535] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:27.894 [2024-06-10 12:03:59.874618] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:27.894 [2024-06-10 12:03:59.874634] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:27.894 [2024-06-10 12:03:59.874641] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:27.894 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:27.895 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:27.895 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:27.895 12:03:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:28.153 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:28.153 "name": "raid_bdev1", 00:40:28.153 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:28.153 "strip_size_kb": 0, 00:40:28.153 "state": "online", 00:40:28.153 "raid_level": "raid1", 00:40:28.153 "superblock": true, 00:40:28.153 "num_base_bdevs": 2, 00:40:28.153 "num_base_bdevs_discovered": 1, 00:40:28.153 "num_base_bdevs_operational": 1, 00:40:28.153 "base_bdevs_list": [ 00:40:28.153 { 00:40:28.153 "name": null, 00:40:28.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:28.153 "is_configured": false, 00:40:28.153 "data_offset": 256, 00:40:28.153 "data_size": 7936 00:40:28.153 }, 00:40:28.153 { 00:40:28.153 "name": "BaseBdev2", 00:40:28.153 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:28.153 "is_configured": true, 00:40:28.153 "data_offset": 256, 00:40:28.153 "data_size": 7936 00:40:28.153 } 00:40:28.153 ] 00:40:28.153 }' 00:40:28.153 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:40:28.153 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:28.719 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:28.719 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:28.719 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:28.719 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:28.719 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:28.719 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:28.719 12:04:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:28.977 12:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:28.977 "name": "raid_bdev1", 00:40:28.977 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:28.977 "strip_size_kb": 0, 00:40:28.977 "state": "online", 00:40:28.977 "raid_level": "raid1", 00:40:28.977 "superblock": true, 00:40:28.977 "num_base_bdevs": 2, 00:40:28.977 "num_base_bdevs_discovered": 1, 00:40:28.977 "num_base_bdevs_operational": 1, 00:40:28.977 "base_bdevs_list": [ 00:40:28.977 { 00:40:28.977 "name": null, 00:40:28.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:28.977 "is_configured": false, 00:40:28.977 "data_offset": 256, 00:40:28.977 "data_size": 7936 00:40:28.977 }, 00:40:28.977 { 00:40:28.977 "name": "BaseBdev2", 00:40:28.977 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:28.977 "is_configured": true, 00:40:28.977 "data_offset": 256, 00:40:28.977 "data_size": 7936 00:40:28.977 } 00:40:28.978 ] 00:40:28.978 }' 00:40:28.978 12:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:29.236 12:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:29.236 12:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:29.236 12:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:29.236 12:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:40:29.494 12:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:29.753 [2024-06-10 12:04:01.631883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:29.753 [2024-06-10 12:04:01.631973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:29.753 [2024-06-10 12:04:01.632014] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:40:29.753 [2024-06-10 12:04:01.632035] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:29.753 [2024-06-10 12:04:01.632505] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:29.753 [2024-06-10 12:04:01.632559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:40:29.753 [2024-06-10 12:04:01.632708] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:40:29.753 [2024-06-10 12:04:01.632722] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:29.753 [2024-06-10 12:04:01.632730] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:29.753 BaseBdev1 00:40:29.753 12:04:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:30.686 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:30.944 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:30.944 "name": "raid_bdev1", 00:40:30.944 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:30.944 "strip_size_kb": 0, 00:40:30.944 "state": "online", 00:40:30.944 "raid_level": "raid1", 00:40:30.944 "superblock": true, 00:40:30.944 "num_base_bdevs": 2, 00:40:30.944 "num_base_bdevs_discovered": 1, 00:40:30.944 "num_base_bdevs_operational": 1, 00:40:30.944 "base_bdevs_list": [ 00:40:30.944 { 00:40:30.944 "name": null, 00:40:30.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:30.944 "is_configured": false, 00:40:30.944 "data_offset": 256, 00:40:30.944 "data_size": 7936 00:40:30.944 }, 00:40:30.944 { 00:40:30.944 "name": "BaseBdev2", 00:40:30.944 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:30.944 "is_configured": true, 00:40:30.944 "data_offset": 256, 00:40:30.944 "data_size": 7936 00:40:30.944 } 00:40:30.944 ] 00:40:30.944 }' 00:40:30.944 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:30.944 12:04:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:31.511 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:31.511 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:31.511 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:40:31.511 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:31.511 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:31.511 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:31.511 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:32.078 "name": "raid_bdev1", 00:40:32.078 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:32.078 "strip_size_kb": 0, 00:40:32.078 "state": "online", 00:40:32.078 "raid_level": "raid1", 00:40:32.078 "superblock": true, 00:40:32.078 "num_base_bdevs": 2, 00:40:32.078 "num_base_bdevs_discovered": 1, 00:40:32.078 "num_base_bdevs_operational": 1, 00:40:32.078 "base_bdevs_list": [ 00:40:32.078 { 00:40:32.078 "name": null, 00:40:32.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:32.078 "is_configured": false, 00:40:32.078 "data_offset": 256, 00:40:32.078 "data_size": 7936 00:40:32.078 }, 00:40:32.078 { 00:40:32.078 "name": "BaseBdev2", 00:40:32.078 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:32.078 "is_configured": true, 00:40:32.078 "data_offset": 256, 00:40:32.078 "data_size": 7936 00:40:32.078 } 00:40:32.078 ] 00:40:32.078 }' 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@649 -- # local es=0 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@643 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:32.078 12:04:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:32.078 [2024-06-10 12:04:04.106979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:32.078 [2024-06-10 12:04:04.107162] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:32.078 [2024-06-10 12:04:04.107175] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:32.078 request: 00:40:32.078 { 00:40:32.078 "base_bdev": "BaseBdev1", 00:40:32.078 "raid_bdev": "raid_bdev1", 00:40:32.078 "method": "bdev_raid_add_base_bdev", 00:40:32.078 "req_id": 1 00:40:32.078 } 00:40:32.078 Got JSON-RPC error response 00:40:32.078 response: 00:40:32.078 { 00:40:32.078 "code": -22, 00:40:32.078 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:40:32.078 } 00:40:32.079 12:04:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # es=1 00:40:32.079 12:04:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:32.079 12:04:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:32.079 12:04:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:32.079 12:04:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:40:33.491 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:33.491 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:33.491 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:33.492 "name": "raid_bdev1", 00:40:33.492 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:33.492 "strip_size_kb": 0, 00:40:33.492 "state": "online", 00:40:33.492 "raid_level": "raid1", 00:40:33.492 "superblock": true, 00:40:33.492 "num_base_bdevs": 2, 00:40:33.492 "num_base_bdevs_discovered": 1, 00:40:33.492 "num_base_bdevs_operational": 1, 00:40:33.492 
"base_bdevs_list": [ 00:40:33.492 { 00:40:33.492 "name": null, 00:40:33.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:33.492 "is_configured": false, 00:40:33.492 "data_offset": 256, 00:40:33.492 "data_size": 7936 00:40:33.492 }, 00:40:33.492 { 00:40:33.492 "name": "BaseBdev2", 00:40:33.492 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:33.492 "is_configured": true, 00:40:33.492 "data_offset": 256, 00:40:33.492 "data_size": 7936 00:40:33.492 } 00:40:33.492 ] 00:40:33.492 }' 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:33.492 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:34.058 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:34.058 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:34.058 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:34.058 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:34.058 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:34.058 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:34.058 12:04:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:34.316 "name": "raid_bdev1", 00:40:34.316 "uuid": "a2f462d9-d174-40c4-8aab-ad0abef8001b", 00:40:34.316 "strip_size_kb": 0, 00:40:34.316 "state": "online", 00:40:34.316 "raid_level": "raid1", 00:40:34.316 "superblock": true, 00:40:34.316 "num_base_bdevs": 2, 00:40:34.316 "num_base_bdevs_discovered": 1, 00:40:34.316 "num_base_bdevs_operational": 1, 00:40:34.316 "base_bdevs_list": [ 00:40:34.316 { 00:40:34.316 "name": null, 00:40:34.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:34.316 "is_configured": false, 00:40:34.316 "data_offset": 256, 00:40:34.316 "data_size": 7936 00:40:34.316 }, 00:40:34.316 { 00:40:34.316 "name": "BaseBdev2", 00:40:34.316 "uuid": "78a938da-37ba-52a9-93a2-0cd2160f27f9", 00:40:34.316 "is_configured": true, 00:40:34.316 "data_offset": 256, 00:40:34.316 "data_size": 7936 00:40:34.316 } 00:40:34.316 ] 00:40:34.316 }' 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 161713 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@949 -- # '[' -z 161713 ']' 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # kill -0 161713 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # uname 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 161713 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # echo 'killing process with pid 161713' 00:40:34.316 killing process with pid 161713 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # kill 161713 00:40:34.316 Received shutdown signal, test time was about 60.000000 seconds 00:40:34.316 00:40:34.316 Latency(us) 00:40:34.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.316 =================================================================================================================== 00:40:34.316 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:34.316 [2024-06-10 12:04:06.322839] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:34.316 [2024-06-10 12:04:06.322959] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:34.316 [2024-06-10 12:04:06.323007] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:34.316 [2024-06-10 12:04:06.323017] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:40:34.316 12:04:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # wait 161713 00:40:34.883 [2024-06-10 12:04:06.675715] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:36.258 12:04:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:40:36.258 00:40:36.258 real 0m35.212s 00:40:36.258 user 0m54.803s 00:40:36.258 sys 0m4.842s 00:40:36.258 12:04:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:36.258 12:04:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:40:36.258 ************************************ 00:40:36.258 END TEST raid_rebuild_test_sb_4k 00:40:36.258 ************************************ 00:40:36.258 12:04:08 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:40:36.258 12:04:08 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:40:36.258 12:04:08 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:40:36.258 12:04:08 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:36.258 12:04:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:36.258 ************************************ 00:40:36.258 START TEST raid_state_function_test_sb_md_separate 00:40:36.258 ************************************ 00:40:36.258 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:40:36.258 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:40:36.258 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:40:36.258 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:40:36.258 12:04:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:40:36.258 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=162624 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 162624' 00:40:36.259 Process raid pid: 162624 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 162624 /var/tmp/spdk-raid.sock 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@830 -- # '[' -z 162624 ']' 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:36.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
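Once bdev_svc is listening on the raid socket, the state-function test drives a simple flow: the raid bdev is created first with a superblock and stays in the "configuring" state, and each md-separate malloc base bdev is claimed as soon as it appears, after which the raid goes online. The sketch below is a rough reconstruction of that RPC sequence assembled from the invocations visible in the trace that follows; error handling, waitforlisten, and the intermediate delete/re-create steps performed by the real test are omitted.

#!/usr/bin/env bash
# Assumes the bdev_svc app was started as in the trace:
#   bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# RAID1 with on-disk superblock (-s); the base bdevs do not exist yet,
# so Existed_Raid is created in the "configuring" state.
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# 32 MiB malloc bdevs with 4096-byte blocks and 32 bytes of separate
# metadata per block (base_malloc_params='-m 32' in this part of the suite).
# Each one is claimed by the configuring raid as soon as it is created.
$rpc bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
$rpc bdev_malloc_create 32 4096 -m 32 -b BaseBdev2

# With both base bdevs claimed, the raid should now report "online".
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'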
00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:36.259 12:04:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:36.259 [2024-06-10 12:04:08.306641] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:40:36.259 [2024-06-10 12:04:08.307528] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:36.517 [2024-06-10 12:04:08.487954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.776 [2024-06-10 12:04:08.687706] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.033 [2024-06-10 12:04:08.902304] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:37.291 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:37.291 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@863 -- # return 0 00:40:37.291 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:40:37.550 [2024-06-10 12:04:09.482785] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:37.550 [2024-06-10 12:04:09.482865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:37.550 [2024-06-10 12:04:09.482875] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:37.550 [2024-06-10 12:04:09.482901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:37.550 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:37.808 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:37.808 "name": "Existed_Raid", 00:40:37.808 "uuid": "3b421f0a-f896-4cd2-97a2-24154d5ce3a4", 00:40:37.808 "strip_size_kb": 0, 00:40:37.808 "state": "configuring", 00:40:37.808 "raid_level": "raid1", 00:40:37.808 "superblock": true, 00:40:37.808 "num_base_bdevs": 2, 00:40:37.808 "num_base_bdevs_discovered": 0, 00:40:37.808 "num_base_bdevs_operational": 2, 00:40:37.808 "base_bdevs_list": [ 00:40:37.808 { 00:40:37.808 "name": "BaseBdev1", 00:40:37.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.808 "is_configured": false, 00:40:37.808 "data_offset": 0, 00:40:37.808 "data_size": 0 00:40:37.808 }, 00:40:37.808 { 00:40:37.808 "name": "BaseBdev2", 00:40:37.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.808 "is_configured": false, 00:40:37.808 "data_offset": 0, 00:40:37.808 "data_size": 0 00:40:37.808 } 00:40:37.808 ] 00:40:37.808 }' 00:40:37.808 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:37.808 12:04:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:38.375 12:04:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:40:38.633 [2024-06-10 12:04:10.518977] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:38.633 [2024-06-10 12:04:10.519015] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:40:38.633 12:04:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:40:38.891 [2024-06-10 12:04:10.799038] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:38.891 [2024-06-10 12:04:10.799105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:38.891 [2024-06-10 12:04:10.799115] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:38.891 [2024-06-10 12:04:10.799155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:38.891 12:04:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:40:39.149 [2024-06-10 12:04:11.078284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:39.149 BaseBdev1 00:40:39.149 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:40:39.149 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:40:39.149 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:40:39.149 12:04:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local i 00:40:39.149 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:40:39.149 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:40:39.149 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:39.406 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:39.664 [ 00:40:39.664 { 00:40:39.664 "name": "BaseBdev1", 00:40:39.664 "aliases": [ 00:40:39.664 "6ba61b7b-97ad-4115-a681-2b90b818de20" 00:40:39.664 ], 00:40:39.664 "product_name": "Malloc disk", 00:40:39.664 "block_size": 4096, 00:40:39.664 "num_blocks": 8192, 00:40:39.664 "uuid": "6ba61b7b-97ad-4115-a681-2b90b818de20", 00:40:39.664 "md_size": 32, 00:40:39.664 "md_interleave": false, 00:40:39.664 "dif_type": 0, 00:40:39.664 "assigned_rate_limits": { 00:40:39.664 "rw_ios_per_sec": 0, 00:40:39.664 "rw_mbytes_per_sec": 0, 00:40:39.664 "r_mbytes_per_sec": 0, 00:40:39.664 "w_mbytes_per_sec": 0 00:40:39.664 }, 00:40:39.664 "claimed": true, 00:40:39.664 "claim_type": "exclusive_write", 00:40:39.664 "zoned": false, 00:40:39.664 "supported_io_types": { 00:40:39.664 "read": true, 00:40:39.664 "write": true, 00:40:39.664 "unmap": true, 00:40:39.664 "write_zeroes": true, 00:40:39.664 "flush": true, 00:40:39.664 "reset": true, 00:40:39.664 "compare": false, 00:40:39.664 "compare_and_write": false, 00:40:39.664 "abort": true, 00:40:39.664 "nvme_admin": false, 00:40:39.664 "nvme_io": false 00:40:39.664 }, 00:40:39.664 "memory_domains": [ 00:40:39.664 { 00:40:39.664 "dma_device_id": "system", 00:40:39.664 "dma_device_type": 1 00:40:39.664 }, 00:40:39.664 { 00:40:39.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:39.664 "dma_device_type": 2 00:40:39.664 } 00:40:39.664 ], 00:40:39.664 "driver_specific": {} 00:40:39.664 } 00:40:39.664 ] 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # return 0 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:39.664 12:04:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:39.664 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:39.950 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:39.950 "name": "Existed_Raid", 00:40:39.950 "uuid": "869adbb9-8a24-4072-9bc2-8486cee3b7e2", 00:40:39.950 "strip_size_kb": 0, 00:40:39.950 "state": "configuring", 00:40:39.950 "raid_level": "raid1", 00:40:39.950 "superblock": true, 00:40:39.950 "num_base_bdevs": 2, 00:40:39.950 "num_base_bdevs_discovered": 1, 00:40:39.950 "num_base_bdevs_operational": 2, 00:40:39.950 "base_bdevs_list": [ 00:40:39.950 { 00:40:39.950 "name": "BaseBdev1", 00:40:39.950 "uuid": "6ba61b7b-97ad-4115-a681-2b90b818de20", 00:40:39.950 "is_configured": true, 00:40:39.950 "data_offset": 256, 00:40:39.950 "data_size": 7936 00:40:39.951 }, 00:40:39.951 { 00:40:39.951 "name": "BaseBdev2", 00:40:39.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:39.951 "is_configured": false, 00:40:39.951 "data_offset": 0, 00:40:39.951 "data_size": 0 00:40:39.951 } 00:40:39.951 ] 00:40:39.951 }' 00:40:39.951 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:39.951 12:04:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:40.552 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:40:40.552 [2024-06-10 12:04:12.538595] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:40.552 [2024-06-10 12:04:12.538651] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:40:40.552 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:40:40.811 [2024-06-10 12:04:12.814755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:40.811 [2024-06-10 12:04:12.816699] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:40.811 [2024-06-10 12:04:12.816757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:40.811 12:04:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:41.070 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:41.070 "name": "Existed_Raid", 00:40:41.070 "uuid": "b880abe3-1f96-40b1-bd04-4cbef098f56b", 00:40:41.070 "strip_size_kb": 0, 00:40:41.070 "state": "configuring", 00:40:41.070 "raid_level": "raid1", 00:40:41.070 "superblock": true, 00:40:41.070 "num_base_bdevs": 2, 00:40:41.070 "num_base_bdevs_discovered": 1, 00:40:41.070 "num_base_bdevs_operational": 2, 00:40:41.070 "base_bdevs_list": [ 00:40:41.070 { 00:40:41.070 "name": "BaseBdev1", 00:40:41.070 "uuid": "6ba61b7b-97ad-4115-a681-2b90b818de20", 00:40:41.070 "is_configured": true, 00:40:41.070 "data_offset": 256, 00:40:41.070 "data_size": 7936 00:40:41.070 }, 00:40:41.070 { 00:40:41.070 "name": "BaseBdev2", 00:40:41.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:41.070 "is_configured": false, 00:40:41.070 "data_offset": 0, 00:40:41.070 "data_size": 0 00:40:41.070 } 00:40:41.070 ] 00:40:41.070 }' 00:40:41.070 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:41.070 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:41.637 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:40:41.895 [2024-06-10 12:04:13.905577] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:41.895 [2024-06-10 12:04:13.905768] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:40:41.895 [2024-06-10 12:04:13.905780] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:41.895 [2024-06-10 12:04:13.905900] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:40:41.895 [2024-06-10 12:04:13.905983] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:40:41.895 [2024-06-10 12:04:13.905991] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:40:41.895 [2024-06-10 12:04:13.906099] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:41.895 BaseBdev2 00:40:41.895 12:04:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:40:41.895 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:40:41.895 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:40:41.895 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local i 00:40:41.895 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:40:41.895 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:40:41.895 12:04:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:42.153 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:42.411 [ 00:40:42.411 { 00:40:42.411 "name": "BaseBdev2", 00:40:42.411 "aliases": [ 00:40:42.411 "ebedadfc-065e-46ee-a272-ef48cdb7c44b" 00:40:42.411 ], 00:40:42.411 "product_name": "Malloc disk", 00:40:42.411 "block_size": 4096, 00:40:42.411 "num_blocks": 8192, 00:40:42.411 "uuid": "ebedadfc-065e-46ee-a272-ef48cdb7c44b", 00:40:42.411 "md_size": 32, 00:40:42.411 "md_interleave": false, 00:40:42.411 "dif_type": 0, 00:40:42.411 "assigned_rate_limits": { 00:40:42.411 "rw_ios_per_sec": 0, 00:40:42.411 "rw_mbytes_per_sec": 0, 00:40:42.411 "r_mbytes_per_sec": 0, 00:40:42.411 "w_mbytes_per_sec": 0 00:40:42.411 }, 00:40:42.411 "claimed": true, 00:40:42.411 "claim_type": "exclusive_write", 00:40:42.411 "zoned": false, 00:40:42.411 "supported_io_types": { 00:40:42.411 "read": true, 00:40:42.411 "write": true, 00:40:42.411 "unmap": true, 00:40:42.411 "write_zeroes": true, 00:40:42.411 "flush": true, 00:40:42.411 "reset": true, 00:40:42.411 "compare": false, 00:40:42.411 "compare_and_write": false, 00:40:42.411 "abort": true, 00:40:42.411 "nvme_admin": false, 00:40:42.411 "nvme_io": false 00:40:42.411 }, 00:40:42.411 "memory_domains": [ 00:40:42.411 { 00:40:42.411 "dma_device_id": "system", 00:40:42.411 "dma_device_type": 1 00:40:42.411 }, 00:40:42.411 { 00:40:42.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:42.411 "dma_device_type": 2 00:40:42.411 } 00:40:42.411 ], 00:40:42.411 "driver_specific": {} 00:40:42.411 } 00:40:42.411 ] 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # return 0 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:42.411 12:04:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:42.411 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:42.669 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:42.669 "name": "Existed_Raid", 00:40:42.669 "uuid": "b880abe3-1f96-40b1-bd04-4cbef098f56b", 00:40:42.669 "strip_size_kb": 0, 00:40:42.669 "state": "online", 00:40:42.669 "raid_level": "raid1", 00:40:42.669 "superblock": true, 00:40:42.669 "num_base_bdevs": 2, 00:40:42.669 "num_base_bdevs_discovered": 2, 00:40:42.669 "num_base_bdevs_operational": 2, 00:40:42.669 "base_bdevs_list": [ 00:40:42.669 { 00:40:42.669 "name": "BaseBdev1", 00:40:42.669 "uuid": "6ba61b7b-97ad-4115-a681-2b90b818de20", 00:40:42.669 "is_configured": true, 00:40:42.669 "data_offset": 256, 00:40:42.669 "data_size": 7936 00:40:42.669 }, 00:40:42.669 { 00:40:42.669 "name": "BaseBdev2", 00:40:42.669 "uuid": "ebedadfc-065e-46ee-a272-ef48cdb7c44b", 00:40:42.669 "is_configured": true, 00:40:42.669 "data_offset": 256, 00:40:42.669 "data_size": 7936 00:40:42.669 } 00:40:42.669 ] 00:40:42.669 }' 00:40:42.669 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:42.669 12:04:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:43.234 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:40:43.234 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:40:43.234 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:40:43.234 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:40:43.234 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:40:43.234 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:40:43.234 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:40:43.234 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:40:43.492 [2024-06-10 12:04:15.463502] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:43.492 12:04:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:40:43.492 "name": "Existed_Raid", 00:40:43.492 "aliases": [ 00:40:43.492 "b880abe3-1f96-40b1-bd04-4cbef098f56b" 00:40:43.492 ], 00:40:43.492 "product_name": "Raid Volume", 00:40:43.492 "block_size": 4096, 00:40:43.492 "num_blocks": 7936, 00:40:43.492 "uuid": "b880abe3-1f96-40b1-bd04-4cbef098f56b", 00:40:43.492 "md_size": 32, 00:40:43.492 "md_interleave": false, 00:40:43.492 "dif_type": 0, 00:40:43.492 "assigned_rate_limits": { 00:40:43.492 "rw_ios_per_sec": 0, 00:40:43.492 "rw_mbytes_per_sec": 0, 00:40:43.492 "r_mbytes_per_sec": 0, 00:40:43.492 "w_mbytes_per_sec": 0 00:40:43.492 }, 00:40:43.492 "claimed": false, 00:40:43.492 "zoned": false, 00:40:43.492 "supported_io_types": { 00:40:43.492 "read": true, 00:40:43.492 "write": true, 00:40:43.492 "unmap": false, 00:40:43.492 "write_zeroes": true, 00:40:43.492 "flush": false, 00:40:43.492 "reset": true, 00:40:43.492 "compare": false, 00:40:43.492 "compare_and_write": false, 00:40:43.492 "abort": false, 00:40:43.492 "nvme_admin": false, 00:40:43.492 "nvme_io": false 00:40:43.492 }, 00:40:43.492 "memory_domains": [ 00:40:43.492 { 00:40:43.492 "dma_device_id": "system", 00:40:43.492 "dma_device_type": 1 00:40:43.492 }, 00:40:43.492 { 00:40:43.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:43.492 "dma_device_type": 2 00:40:43.492 }, 00:40:43.492 { 00:40:43.492 "dma_device_id": "system", 00:40:43.492 "dma_device_type": 1 00:40:43.492 }, 00:40:43.492 { 00:40:43.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:43.492 "dma_device_type": 2 00:40:43.492 } 00:40:43.492 ], 00:40:43.492 "driver_specific": { 00:40:43.492 "raid": { 00:40:43.492 "uuid": "b880abe3-1f96-40b1-bd04-4cbef098f56b", 00:40:43.492 "strip_size_kb": 0, 00:40:43.492 "state": "online", 00:40:43.492 "raid_level": "raid1", 00:40:43.492 "superblock": true, 00:40:43.492 "num_base_bdevs": 2, 00:40:43.492 "num_base_bdevs_discovered": 2, 00:40:43.492 "num_base_bdevs_operational": 2, 00:40:43.492 "base_bdevs_list": [ 00:40:43.493 { 00:40:43.493 "name": "BaseBdev1", 00:40:43.493 "uuid": "6ba61b7b-97ad-4115-a681-2b90b818de20", 00:40:43.493 "is_configured": true, 00:40:43.493 "data_offset": 256, 00:40:43.493 "data_size": 7936 00:40:43.493 }, 00:40:43.493 { 00:40:43.493 "name": "BaseBdev2", 00:40:43.493 "uuid": "ebedadfc-065e-46ee-a272-ef48cdb7c44b", 00:40:43.493 "is_configured": true, 00:40:43.493 "data_offset": 256, 00:40:43.493 "data_size": 7936 00:40:43.493 } 00:40:43.493 ] 00:40:43.493 } 00:40:43.493 } 00:40:43.493 }' 00:40:43.493 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:43.493 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:40:43.493 BaseBdev2' 00:40:43.493 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:43.493 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:40:43.493 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:43.750 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:43.750 "name": "BaseBdev1", 00:40:43.750 "aliases": [ 00:40:43.750 
"6ba61b7b-97ad-4115-a681-2b90b818de20" 00:40:43.750 ], 00:40:43.750 "product_name": "Malloc disk", 00:40:43.750 "block_size": 4096, 00:40:43.750 "num_blocks": 8192, 00:40:43.750 "uuid": "6ba61b7b-97ad-4115-a681-2b90b818de20", 00:40:43.750 "md_size": 32, 00:40:43.750 "md_interleave": false, 00:40:43.750 "dif_type": 0, 00:40:43.750 "assigned_rate_limits": { 00:40:43.750 "rw_ios_per_sec": 0, 00:40:43.750 "rw_mbytes_per_sec": 0, 00:40:43.750 "r_mbytes_per_sec": 0, 00:40:43.750 "w_mbytes_per_sec": 0 00:40:43.750 }, 00:40:43.750 "claimed": true, 00:40:43.750 "claim_type": "exclusive_write", 00:40:43.750 "zoned": false, 00:40:43.750 "supported_io_types": { 00:40:43.750 "read": true, 00:40:43.750 "write": true, 00:40:43.750 "unmap": true, 00:40:43.750 "write_zeroes": true, 00:40:43.750 "flush": true, 00:40:43.750 "reset": true, 00:40:43.750 "compare": false, 00:40:43.750 "compare_and_write": false, 00:40:43.750 "abort": true, 00:40:43.750 "nvme_admin": false, 00:40:43.750 "nvme_io": false 00:40:43.750 }, 00:40:43.750 "memory_domains": [ 00:40:43.750 { 00:40:43.750 "dma_device_id": "system", 00:40:43.750 "dma_device_type": 1 00:40:43.750 }, 00:40:43.750 { 00:40:43.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:43.750 "dma_device_type": 2 00:40:43.750 } 00:40:43.750 ], 00:40:43.750 "driver_specific": {} 00:40:43.750 }' 00:40:43.750 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:43.750 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:44.008 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:44.008 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:44.008 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:44.008 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:44.008 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:44.008 12:04:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:44.008 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:44.008 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:44.008 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:44.266 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:44.267 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:44.267 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:40:44.267 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:44.525 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:44.525 "name": "BaseBdev2", 00:40:44.525 "aliases": [ 00:40:44.525 "ebedadfc-065e-46ee-a272-ef48cdb7c44b" 00:40:44.525 ], 00:40:44.525 "product_name": "Malloc disk", 00:40:44.525 "block_size": 4096, 00:40:44.525 "num_blocks": 8192, 
00:40:44.525 "uuid": "ebedadfc-065e-46ee-a272-ef48cdb7c44b", 00:40:44.525 "md_size": 32, 00:40:44.525 "md_interleave": false, 00:40:44.525 "dif_type": 0, 00:40:44.525 "assigned_rate_limits": { 00:40:44.525 "rw_ios_per_sec": 0, 00:40:44.525 "rw_mbytes_per_sec": 0, 00:40:44.525 "r_mbytes_per_sec": 0, 00:40:44.525 "w_mbytes_per_sec": 0 00:40:44.525 }, 00:40:44.525 "claimed": true, 00:40:44.525 "claim_type": "exclusive_write", 00:40:44.525 "zoned": false, 00:40:44.525 "supported_io_types": { 00:40:44.525 "read": true, 00:40:44.525 "write": true, 00:40:44.525 "unmap": true, 00:40:44.525 "write_zeroes": true, 00:40:44.525 "flush": true, 00:40:44.525 "reset": true, 00:40:44.525 "compare": false, 00:40:44.525 "compare_and_write": false, 00:40:44.525 "abort": true, 00:40:44.525 "nvme_admin": false, 00:40:44.525 "nvme_io": false 00:40:44.525 }, 00:40:44.525 "memory_domains": [ 00:40:44.525 { 00:40:44.525 "dma_device_id": "system", 00:40:44.525 "dma_device_type": 1 00:40:44.525 }, 00:40:44.525 { 00:40:44.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:44.525 "dma_device_type": 2 00:40:44.525 } 00:40:44.525 ], 00:40:44.525 "driver_specific": {} 00:40:44.525 }' 00:40:44.525 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:44.525 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:44.525 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:44.525 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:44.525 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:44.787 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:44.787 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:44.787 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:44.787 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:44.787 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:44.787 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:44.787 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:44.787 12:04:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:40:45.047 [2024-06-10 12:04:17.051673] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:45.305 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:40:45.305 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:40:45.305 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:40:45.305 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:40:45.305 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:40:45.305 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:40:45.305 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:40:45.305 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:45.306 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:45.564 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:45.564 "name": "Existed_Raid", 00:40:45.564 "uuid": "b880abe3-1f96-40b1-bd04-4cbef098f56b", 00:40:45.564 "strip_size_kb": 0, 00:40:45.564 "state": "online", 00:40:45.564 "raid_level": "raid1", 00:40:45.564 "superblock": true, 00:40:45.564 "num_base_bdevs": 2, 00:40:45.564 "num_base_bdevs_discovered": 1, 00:40:45.564 "num_base_bdevs_operational": 1, 00:40:45.564 "base_bdevs_list": [ 00:40:45.564 { 00:40:45.564 "name": null, 00:40:45.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:45.564 "is_configured": false, 00:40:45.564 "data_offset": 256, 00:40:45.564 "data_size": 7936 00:40:45.564 }, 00:40:45.564 { 00:40:45.564 "name": "BaseBdev2", 00:40:45.564 "uuid": "ebedadfc-065e-46ee-a272-ef48cdb7c44b", 00:40:45.564 "is_configured": true, 00:40:45.564 "data_offset": 256, 00:40:45.564 "data_size": 7936 00:40:45.564 } 00:40:45.564 ] 00:40:45.564 }' 00:40:45.564 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:45.564 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:46.131 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:40:46.131 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:40:46.131 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:46.131 12:04:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:40:46.131 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:40:46.131 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:46.131 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:40:46.390 [2024-06-10 12:04:18.334975] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:46.390 [2024-06-10 12:04:18.335096] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:46.390 [2024-06-10 12:04:18.441471] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:46.390 [2024-06-10 12:04:18.441527] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:46.390 [2024-06-10 12:04:18.441537] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:40:46.648 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:40:46.648 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:40:46.648 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:46.648 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 162624 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@949 -- # '[' -z 162624 ']' 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # kill -0 162624 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # uname 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 162624 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # echo 'killing process with pid 162624' 00:40:46.907 killing process with pid 162624 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # kill 162624 00:40:46.907 [2024-06-10 12:04:18.745253] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:46.907 [2024-06-10 12:04:18.745381] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:46.907 12:04:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # wait 162624 00:40:48.283 
12:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:40:48.283 00:40:48.283 real 0m11.841s 00:40:48.283 user 0m20.052s 00:40:48.283 sys 0m1.866s 00:40:48.283 12:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:48.283 12:04:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:48.283 ************************************ 00:40:48.283 END TEST raid_state_function_test_sb_md_separate 00:40:48.283 ************************************ 00:40:48.283 12:04:20 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:40:48.283 12:04:20 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:40:48.283 12:04:20 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:48.283 12:04:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:48.283 ************************************ 00:40:48.283 START TEST raid_superblock_test_md_separate 00:40:48.283 ************************************ 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=162995 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 162995 /var/tmp/spdk-raid.sock 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@830 -- # '[' -z 162995 ']' 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:48.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:48.283 12:04:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:48.283 [2024-06-10 12:04:20.197203] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:40:48.284 [2024-06-10 12:04:20.198046] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162995 ] 00:40:48.542 [2024-06-10 12:04:20.350549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.542 [2024-06-10 12:04:20.546839] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.800 [2024-06-10 12:04:20.746990] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@863 -- # return 0 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:40:49.367 malloc1 00:40:49.367 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:49.630 [2024-06-10 12:04:21.522201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:49.630 [2024-06-10 12:04:21.522303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:49.630 [2024-06-10 
12:04:21.522343] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:40:49.630 [2024-06-10 12:04:21.522369] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:49.630 [2024-06-10 12:04:21.524534] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:49.630 [2024-06-10 12:04:21.524583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:49.630 pt1 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:49.630 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:40:49.899 malloc2 00:40:49.899 12:04:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:50.158 [2024-06-10 12:04:22.096763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:50.158 [2024-06-10 12:04:22.096871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:50.158 [2024-06-10 12:04:22.096929] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:40:50.158 [2024-06-10 12:04:22.096953] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:50.158 [2024-06-10 12:04:22.099176] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:50.158 [2024-06-10 12:04:22.099226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:50.158 pt2 00:40:50.158 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:40:50.158 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:40:50.158 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:40:50.417 [2024-06-10 12:04:22.284874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:50.417 [2024-06-10 12:04:22.286998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:50.417 [2024-06-10 12:04:22.287197] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:40:50.417 [2024-06-10 12:04:22.287211] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:50.417 [2024-06-10 12:04:22.287349] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:40:50.417 [2024-06-10 12:04:22.287437] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:40:50.417 [2024-06-10 12:04:22.287449] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:40:50.417 [2024-06-10 12:04:22.287556] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:50.417 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:50.675 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:50.675 "name": "raid_bdev1", 00:40:50.675 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:50.675 "strip_size_kb": 0, 00:40:50.675 "state": "online", 00:40:50.675 "raid_level": "raid1", 00:40:50.675 "superblock": true, 00:40:50.675 "num_base_bdevs": 2, 00:40:50.675 "num_base_bdevs_discovered": 2, 00:40:50.675 "num_base_bdevs_operational": 2, 00:40:50.675 "base_bdevs_list": [ 00:40:50.675 { 00:40:50.675 "name": "pt1", 00:40:50.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:50.675 "is_configured": true, 00:40:50.675 "data_offset": 256, 00:40:50.675 "data_size": 7936 00:40:50.675 }, 00:40:50.675 { 00:40:50.675 "name": "pt2", 00:40:50.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:50.675 "is_configured": true, 00:40:50.675 "data_offset": 256, 00:40:50.675 "data_size": 7936 00:40:50.675 } 00:40:50.675 ] 00:40:50.675 }' 00:40:50.675 12:04:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:50.675 12:04:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:40:51.243 [2024-06-10 12:04:23.277178] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:51.243 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:40:51.243 "name": "raid_bdev1", 00:40:51.243 "aliases": [ 00:40:51.243 "8e4e0772-3141-4bb0-90c4-b1077801c87c" 00:40:51.243 ], 00:40:51.243 "product_name": "Raid Volume", 00:40:51.243 "block_size": 4096, 00:40:51.243 "num_blocks": 7936, 00:40:51.243 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:51.243 "md_size": 32, 00:40:51.243 "md_interleave": false, 00:40:51.243 "dif_type": 0, 00:40:51.243 "assigned_rate_limits": { 00:40:51.243 "rw_ios_per_sec": 0, 00:40:51.243 "rw_mbytes_per_sec": 0, 00:40:51.243 "r_mbytes_per_sec": 0, 00:40:51.243 "w_mbytes_per_sec": 0 00:40:51.243 }, 00:40:51.243 "claimed": false, 00:40:51.243 "zoned": false, 00:40:51.243 "supported_io_types": { 00:40:51.243 "read": true, 00:40:51.243 "write": true, 00:40:51.243 "unmap": false, 00:40:51.243 "write_zeroes": true, 00:40:51.243 "flush": false, 00:40:51.243 "reset": true, 00:40:51.243 "compare": false, 00:40:51.243 "compare_and_write": false, 00:40:51.243 "abort": false, 00:40:51.243 "nvme_admin": false, 00:40:51.243 "nvme_io": false 00:40:51.243 }, 00:40:51.243 "memory_domains": [ 00:40:51.243 { 00:40:51.243 "dma_device_id": "system", 00:40:51.243 "dma_device_type": 1 00:40:51.243 }, 00:40:51.243 { 00:40:51.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:51.243 "dma_device_type": 2 00:40:51.243 }, 00:40:51.243 { 00:40:51.243 "dma_device_id": "system", 00:40:51.243 "dma_device_type": 1 00:40:51.243 }, 00:40:51.243 { 00:40:51.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:51.243 "dma_device_type": 2 00:40:51.243 } 00:40:51.243 ], 00:40:51.243 "driver_specific": { 00:40:51.243 "raid": { 00:40:51.243 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:51.243 "strip_size_kb": 0, 00:40:51.243 "state": "online", 00:40:51.243 "raid_level": "raid1", 00:40:51.243 "superblock": true, 00:40:51.243 "num_base_bdevs": 2, 00:40:51.243 "num_base_bdevs_discovered": 2, 00:40:51.243 "num_base_bdevs_operational": 2, 00:40:51.243 "base_bdevs_list": [ 00:40:51.243 { 00:40:51.243 "name": "pt1", 00:40:51.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:51.243 "is_configured": true, 00:40:51.243 "data_offset": 256, 00:40:51.243 "data_size": 7936 00:40:51.243 }, 00:40:51.243 { 00:40:51.243 "name": "pt2", 00:40:51.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:51.243 "is_configured": true, 00:40:51.243 "data_offset": 256, 00:40:51.243 "data_size": 7936 00:40:51.243 } 00:40:51.243 ] 00:40:51.243 } 00:40:51.243 } 00:40:51.243 }' 00:40:51.243 12:04:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:51.503 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:40:51.503 pt2' 00:40:51.503 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:51.503 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:51.503 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:51.762 "name": "pt1", 00:40:51.762 "aliases": [ 00:40:51.762 "00000000-0000-0000-0000-000000000001" 00:40:51.762 ], 00:40:51.762 "product_name": "passthru", 00:40:51.762 "block_size": 4096, 00:40:51.762 "num_blocks": 8192, 00:40:51.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:51.762 "md_size": 32, 00:40:51.762 "md_interleave": false, 00:40:51.762 "dif_type": 0, 00:40:51.762 "assigned_rate_limits": { 00:40:51.762 "rw_ios_per_sec": 0, 00:40:51.762 "rw_mbytes_per_sec": 0, 00:40:51.762 "r_mbytes_per_sec": 0, 00:40:51.762 "w_mbytes_per_sec": 0 00:40:51.762 }, 00:40:51.762 "claimed": true, 00:40:51.762 "claim_type": "exclusive_write", 00:40:51.762 "zoned": false, 00:40:51.762 "supported_io_types": { 00:40:51.762 "read": true, 00:40:51.762 "write": true, 00:40:51.762 "unmap": true, 00:40:51.762 "write_zeroes": true, 00:40:51.762 "flush": true, 00:40:51.762 "reset": true, 00:40:51.762 "compare": false, 00:40:51.762 "compare_and_write": false, 00:40:51.762 "abort": true, 00:40:51.762 "nvme_admin": false, 00:40:51.762 "nvme_io": false 00:40:51.762 }, 00:40:51.762 "memory_domains": [ 00:40:51.762 { 00:40:51.762 "dma_device_id": "system", 00:40:51.762 "dma_device_type": 1 00:40:51.762 }, 00:40:51.762 { 00:40:51.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:51.762 "dma_device_type": 2 00:40:51.762 } 00:40:51.762 ], 00:40:51.762 "driver_specific": { 00:40:51.762 "passthru": { 00:40:51.762 "name": "pt1", 00:40:51.762 "base_bdev_name": "malloc1" 00:40:51.762 } 00:40:51.762 } 00:40:51.762 }' 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:51.762 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:52.021 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:40:52.021 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:52.021 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:52.021 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:40:52.021 12:04:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:52.280 "name": "pt2", 00:40:52.280 "aliases": [ 00:40:52.280 "00000000-0000-0000-0000-000000000002" 00:40:52.280 ], 00:40:52.280 "product_name": "passthru", 00:40:52.280 "block_size": 4096, 00:40:52.280 "num_blocks": 8192, 00:40:52.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:52.280 "md_size": 32, 00:40:52.280 "md_interleave": false, 00:40:52.280 "dif_type": 0, 00:40:52.280 "assigned_rate_limits": { 00:40:52.280 "rw_ios_per_sec": 0, 00:40:52.280 "rw_mbytes_per_sec": 0, 00:40:52.280 "r_mbytes_per_sec": 0, 00:40:52.280 "w_mbytes_per_sec": 0 00:40:52.280 }, 00:40:52.280 "claimed": true, 00:40:52.280 "claim_type": "exclusive_write", 00:40:52.280 "zoned": false, 00:40:52.280 "supported_io_types": { 00:40:52.280 "read": true, 00:40:52.280 "write": true, 00:40:52.280 "unmap": true, 00:40:52.280 "write_zeroes": true, 00:40:52.280 "flush": true, 00:40:52.280 "reset": true, 00:40:52.280 "compare": false, 00:40:52.280 "compare_and_write": false, 00:40:52.280 "abort": true, 00:40:52.280 "nvme_admin": false, 00:40:52.280 "nvme_io": false 00:40:52.280 }, 00:40:52.280 "memory_domains": [ 00:40:52.280 { 00:40:52.280 "dma_device_id": "system", 00:40:52.280 "dma_device_type": 1 00:40:52.280 }, 00:40:52.280 { 00:40:52.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:52.280 "dma_device_type": 2 00:40:52.280 } 00:40:52.280 ], 00:40:52.280 "driver_specific": { 00:40:52.280 "passthru": { 00:40:52.280 "name": "pt2", 00:40:52.280 "base_bdev_name": "malloc2" 00:40:52.280 } 00:40:52.280 } 00:40:52.280 }' 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:52.280 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:52.539 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:52.539 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:52.539 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:52.539 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:52.539 12:04:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:52.539 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:40:52.539 [2024-06-10 12:04:24.577424] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:52.539 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8e4e0772-3141-4bb0-90c4-b1077801c87c 00:40:52.539 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 8e4e0772-3141-4bb0-90c4-b1077801c87c ']' 00:40:52.539 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:52.797 [2024-06-10 12:04:24.761197] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:52.797 [2024-06-10 12:04:24.761349] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:52.797 [2024-06-10 12:04:24.761476] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:52.797 [2024-06-10 12:04:24.761561] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:52.797 [2024-06-10 12:04:24.761592] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:40:52.797 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:52.797 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:40:53.056 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:40:53.056 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:40:53.056 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:40:53.056 12:04:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:40:53.314 12:04:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:40:53.314 12:04:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@649 -- # local es=0 00:40:53.573 12:04:25 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:53.573 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:40:53.832 [2024-06-10 12:04:25.809391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:40:53.832 [2024-06-10 12:04:25.811485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:40:53.832 [2024-06-10 12:04:25.811687] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:40:53.832 [2024-06-10 12:04:25.811879] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:40:53.832 [2024-06-10 12:04:25.811985] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:53.832 [2024-06-10 12:04:25.812055] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:40:53.832 request: 00:40:53.832 { 00:40:53.832 "name": "raid_bdev1", 00:40:53.832 "raid_level": "raid1", 00:40:53.832 "base_bdevs": [ 00:40:53.832 "malloc1", 00:40:53.832 "malloc2" 00:40:53.832 ], 00:40:53.832 "superblock": false, 00:40:53.832 "method": "bdev_raid_create", 00:40:53.832 "req_id": 1 00:40:53.832 } 00:40:53.832 Got JSON-RPC error response 00:40:53.832 response: 00:40:53.832 { 00:40:53.832 "code": -17, 00:40:53.832 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:40:53.832 } 00:40:53.832 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # es=1 00:40:53.832 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:53.832 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:53.833 12:04:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:53.833 12:04:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:53.833 12:04:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:40:54.091 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:40:54.091 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:40:54.091 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:54.348 [2024-06-10 12:04:26.281447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:54.349 [2024-06-10 12:04:26.281721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:54.349 [2024-06-10 12:04:26.281786] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:40:54.349 [2024-06-10 12:04:26.281881] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:54.349 [2024-06-10 12:04:26.284117] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:54.349 [2024-06-10 12:04:26.284308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:54.349 [2024-06-10 12:04:26.284554] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:40:54.349 [2024-06-10 12:04:26.284714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:54.349 pt1 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:54.349 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:54.607 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:54.607 "name": "raid_bdev1", 00:40:54.607 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:54.607 "strip_size_kb": 0, 00:40:54.607 "state": "configuring", 00:40:54.607 "raid_level": "raid1", 00:40:54.607 "superblock": 
true, 00:40:54.607 "num_base_bdevs": 2, 00:40:54.607 "num_base_bdevs_discovered": 1, 00:40:54.607 "num_base_bdevs_operational": 2, 00:40:54.607 "base_bdevs_list": [ 00:40:54.607 { 00:40:54.607 "name": "pt1", 00:40:54.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:54.607 "is_configured": true, 00:40:54.607 "data_offset": 256, 00:40:54.607 "data_size": 7936 00:40:54.607 }, 00:40:54.607 { 00:40:54.607 "name": null, 00:40:54.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:54.607 "is_configured": false, 00:40:54.607 "data_offset": 256, 00:40:54.607 "data_size": 7936 00:40:54.607 } 00:40:54.607 ] 00:40:54.607 }' 00:40:54.607 12:04:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:54.607 12:04:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:55.174 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:40:55.174 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:40:55.174 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:40:55.174 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:55.432 [2024-06-10 12:04:27.325629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:55.432 [2024-06-10 12:04:27.325881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:55.432 [2024-06-10 12:04:27.325949] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:40:55.432 [2024-06-10 12:04:27.326051] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:55.432 [2024-06-10 12:04:27.326314] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:55.432 [2024-06-10 12:04:27.326481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:55.432 [2024-06-10 12:04:27.326641] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:55.432 [2024-06-10 12:04:27.326775] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:55.432 [2024-06-10 12:04:27.326891] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:40:55.432 [2024-06-10 12:04:27.327067] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:55.432 [2024-06-10 12:04:27.327215] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:40:55.432 [2024-06-10 12:04:27.327400] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:40:55.432 [2024-06-10 12:04:27.327492] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:40:55.432 [2024-06-10 12:04:27.327645] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:55.432 pt2 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:55.432 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:55.691 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:55.691 "name": "raid_bdev1", 00:40:55.691 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:55.691 "strip_size_kb": 0, 00:40:55.691 "state": "online", 00:40:55.691 "raid_level": "raid1", 00:40:55.691 "superblock": true, 00:40:55.691 "num_base_bdevs": 2, 00:40:55.691 "num_base_bdevs_discovered": 2, 00:40:55.691 "num_base_bdevs_operational": 2, 00:40:55.691 "base_bdevs_list": [ 00:40:55.691 { 00:40:55.691 "name": "pt1", 00:40:55.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:55.691 "is_configured": true, 00:40:55.691 "data_offset": 256, 00:40:55.691 "data_size": 7936 00:40:55.691 }, 00:40:55.691 { 00:40:55.691 "name": "pt2", 00:40:55.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:55.691 "is_configured": true, 00:40:55.691 "data_offset": 256, 00:40:55.691 "data_size": 7936 00:40:55.691 } 00:40:55.691 ] 00:40:55.691 }' 00:40:55.691 12:04:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:55.691 12:04:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:56.258 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:40:56.258 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:40:56.258 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:40:56.258 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:40:56.258 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:40:56.258 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:40:56.258 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:56.258 12:04:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:40:56.258 [2024-06-10 12:04:28.298014] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:56.518 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:40:56.518 "name": "raid_bdev1", 00:40:56.518 "aliases": [ 00:40:56.518 "8e4e0772-3141-4bb0-90c4-b1077801c87c" 00:40:56.518 ], 00:40:56.518 "product_name": "Raid Volume", 00:40:56.518 "block_size": 4096, 00:40:56.518 "num_blocks": 7936, 00:40:56.518 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:56.518 "md_size": 32, 00:40:56.518 "md_interleave": false, 00:40:56.518 "dif_type": 0, 00:40:56.518 "assigned_rate_limits": { 00:40:56.518 "rw_ios_per_sec": 0, 00:40:56.518 "rw_mbytes_per_sec": 0, 00:40:56.518 "r_mbytes_per_sec": 0, 00:40:56.518 "w_mbytes_per_sec": 0 00:40:56.518 }, 00:40:56.518 "claimed": false, 00:40:56.518 "zoned": false, 00:40:56.518 "supported_io_types": { 00:40:56.518 "read": true, 00:40:56.518 "write": true, 00:40:56.518 "unmap": false, 00:40:56.518 "write_zeroes": true, 00:40:56.518 "flush": false, 00:40:56.518 "reset": true, 00:40:56.518 "compare": false, 00:40:56.518 "compare_and_write": false, 00:40:56.518 "abort": false, 00:40:56.518 "nvme_admin": false, 00:40:56.518 "nvme_io": false 00:40:56.518 }, 00:40:56.518 "memory_domains": [ 00:40:56.518 { 00:40:56.518 "dma_device_id": "system", 00:40:56.518 "dma_device_type": 1 00:40:56.518 }, 00:40:56.518 { 00:40:56.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:56.518 "dma_device_type": 2 00:40:56.518 }, 00:40:56.518 { 00:40:56.518 "dma_device_id": "system", 00:40:56.518 "dma_device_type": 1 00:40:56.518 }, 00:40:56.518 { 00:40:56.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:56.518 "dma_device_type": 2 00:40:56.518 } 00:40:56.518 ], 00:40:56.518 "driver_specific": { 00:40:56.518 "raid": { 00:40:56.518 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:56.518 "strip_size_kb": 0, 00:40:56.518 "state": "online", 00:40:56.518 "raid_level": "raid1", 00:40:56.518 "superblock": true, 00:40:56.518 "num_base_bdevs": 2, 00:40:56.518 "num_base_bdevs_discovered": 2, 00:40:56.518 "num_base_bdevs_operational": 2, 00:40:56.518 "base_bdevs_list": [ 00:40:56.518 { 00:40:56.518 "name": "pt1", 00:40:56.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:56.518 "is_configured": true, 00:40:56.518 "data_offset": 256, 00:40:56.518 "data_size": 7936 00:40:56.518 }, 00:40:56.518 { 00:40:56.518 "name": "pt2", 00:40:56.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:56.518 "is_configured": true, 00:40:56.518 "data_offset": 256, 00:40:56.518 "data_size": 7936 00:40:56.518 } 00:40:56.518 ] 00:40:56.518 } 00:40:56.518 } 00:40:56.518 }' 00:40:56.518 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:56.518 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:40:56.518 pt2' 00:40:56.518 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:56.518 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:40:56.518 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:56.518 12:04:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:56.518 "name": "pt1", 00:40:56.518 "aliases": [ 00:40:56.518 "00000000-0000-0000-0000-000000000001" 00:40:56.518 ], 00:40:56.518 "product_name": "passthru", 00:40:56.518 "block_size": 4096, 00:40:56.518 "num_blocks": 8192, 00:40:56.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:56.518 "md_size": 32, 00:40:56.518 "md_interleave": false, 00:40:56.518 "dif_type": 0, 00:40:56.518 "assigned_rate_limits": { 00:40:56.518 "rw_ios_per_sec": 0, 00:40:56.518 "rw_mbytes_per_sec": 0, 00:40:56.518 "r_mbytes_per_sec": 0, 00:40:56.518 "w_mbytes_per_sec": 0 00:40:56.518 }, 00:40:56.518 "claimed": true, 00:40:56.518 "claim_type": "exclusive_write", 00:40:56.518 "zoned": false, 00:40:56.518 "supported_io_types": { 00:40:56.518 "read": true, 00:40:56.518 "write": true, 00:40:56.518 "unmap": true, 00:40:56.518 "write_zeroes": true, 00:40:56.518 "flush": true, 00:40:56.518 "reset": true, 00:40:56.518 "compare": false, 00:40:56.518 "compare_and_write": false, 00:40:56.518 "abort": true, 00:40:56.518 "nvme_admin": false, 00:40:56.518 "nvme_io": false 00:40:56.518 }, 00:40:56.518 "memory_domains": [ 00:40:56.518 { 00:40:56.518 "dma_device_id": "system", 00:40:56.518 "dma_device_type": 1 00:40:56.518 }, 00:40:56.518 { 00:40:56.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:56.518 "dma_device_type": 2 00:40:56.518 } 00:40:56.518 ], 00:40:56.518 "driver_specific": { 00:40:56.518 "passthru": { 00:40:56.518 "name": "pt1", 00:40:56.518 "base_bdev_name": "malloc1" 00:40:56.518 } 00:40:56.518 } 00:40:56.518 }' 00:40:56.518 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:56.777 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:57.035 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:57.035 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:57.035 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:57.035 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:40:57.035 12:04:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:57.294 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:57.294 "name": "pt2", 00:40:57.294 "aliases": [ 00:40:57.294 
"00000000-0000-0000-0000-000000000002" 00:40:57.294 ], 00:40:57.294 "product_name": "passthru", 00:40:57.294 "block_size": 4096, 00:40:57.294 "num_blocks": 8192, 00:40:57.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:57.294 "md_size": 32, 00:40:57.294 "md_interleave": false, 00:40:57.294 "dif_type": 0, 00:40:57.294 "assigned_rate_limits": { 00:40:57.294 "rw_ios_per_sec": 0, 00:40:57.294 "rw_mbytes_per_sec": 0, 00:40:57.294 "r_mbytes_per_sec": 0, 00:40:57.294 "w_mbytes_per_sec": 0 00:40:57.294 }, 00:40:57.294 "claimed": true, 00:40:57.294 "claim_type": "exclusive_write", 00:40:57.294 "zoned": false, 00:40:57.294 "supported_io_types": { 00:40:57.294 "read": true, 00:40:57.294 "write": true, 00:40:57.294 "unmap": true, 00:40:57.294 "write_zeroes": true, 00:40:57.294 "flush": true, 00:40:57.294 "reset": true, 00:40:57.294 "compare": false, 00:40:57.294 "compare_and_write": false, 00:40:57.294 "abort": true, 00:40:57.294 "nvme_admin": false, 00:40:57.294 "nvme_io": false 00:40:57.294 }, 00:40:57.294 "memory_domains": [ 00:40:57.294 { 00:40:57.294 "dma_device_id": "system", 00:40:57.294 "dma_device_type": 1 00:40:57.294 }, 00:40:57.294 { 00:40:57.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:57.294 "dma_device_type": 2 00:40:57.294 } 00:40:57.294 ], 00:40:57.294 "driver_specific": { 00:40:57.294 "passthru": { 00:40:57.294 "name": "pt2", 00:40:57.294 "base_bdev_name": "malloc2" 00:40:57.294 } 00:40:57.294 } 00:40:57.294 }' 00:40:57.294 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:57.294 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:57.294 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:57.294 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:57.294 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:57.294 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:57.294 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:57.552 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:57.552 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:57.552 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:57.552 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:57.552 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:57.552 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:57.552 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:40:57.810 [2024-06-10 12:04:29.774276] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:57.810 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 8e4e0772-3141-4bb0-90c4-b1077801c87c '!=' 8e4e0772-3141-4bb0-90c4-b1077801c87c ']' 00:40:57.810 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:40:57.810 12:04:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:40:57.810 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:40:57.810 12:04:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:40:58.069 [2024-06-10 12:04:30.038216] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:58.069 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:58.328 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:58.328 "name": "raid_bdev1", 00:40:58.328 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:58.328 "strip_size_kb": 0, 00:40:58.328 "state": "online", 00:40:58.328 "raid_level": "raid1", 00:40:58.328 "superblock": true, 00:40:58.328 "num_base_bdevs": 2, 00:40:58.328 "num_base_bdevs_discovered": 1, 00:40:58.328 "num_base_bdevs_operational": 1, 00:40:58.328 "base_bdevs_list": [ 00:40:58.328 { 00:40:58.328 "name": null, 00:40:58.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:58.328 "is_configured": false, 00:40:58.328 "data_offset": 256, 00:40:58.328 "data_size": 7936 00:40:58.328 }, 00:40:58.328 { 00:40:58.328 "name": "pt2", 00:40:58.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:58.328 "is_configured": true, 00:40:58.328 "data_offset": 256, 00:40:58.328 "data_size": 7936 00:40:58.328 } 00:40:58.328 ] 00:40:58.328 }' 00:40:58.328 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:58.328 12:04:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:58.895 12:04:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:59.153 [2024-06-10 12:04:31.054375] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:59.153 
[2024-06-10 12:04:31.054532] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:59.153 [2024-06-10 12:04:31.054758] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:59.153 [2024-06-10 12:04:31.054892] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:59.153 [2024-06-10 12:04:31.054969] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:40:59.153 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:59.153 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:40:59.412 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:40:59.412 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:40:59.412 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:40:59.412 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:40:59.412 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:59.671 [2024-06-10 12:04:31.690443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:59.671 [2024-06-10 12:04:31.690698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:59.671 [2024-06-10 12:04:31.690818] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:40:59.671 [2024-06-10 12:04:31.690918] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:59.671 [2024-06-10 12:04:31.693176] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:59.671 [2024-06-10 12:04:31.693359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:59.671 [2024-06-10 12:04:31.693565] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:59.671 [2024-06-10 12:04:31.693687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:59.671 [2024-06-10 12:04:31.693854] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:40:59.671 [2024-06-10 12:04:31.693939] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:59.671 [2024-06-10 12:04:31.694066] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:59.671 [2024-06-10 12:04:31.694260] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:40:59.671 [2024-06-10 12:04:31.694345] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:40:59.671 [2024-06-10 12:04:31.694560] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:59.671 pt2 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:59.671 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:59.929 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:59.929 "name": "raid_bdev1", 00:40:59.929 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:40:59.929 "strip_size_kb": 0, 00:40:59.929 "state": "online", 00:40:59.929 "raid_level": "raid1", 00:40:59.929 "superblock": true, 00:40:59.929 "num_base_bdevs": 2, 00:40:59.929 "num_base_bdevs_discovered": 1, 00:40:59.929 "num_base_bdevs_operational": 1, 00:40:59.929 "base_bdevs_list": [ 00:40:59.929 { 00:40:59.929 "name": null, 00:40:59.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:59.929 "is_configured": false, 00:40:59.929 "data_offset": 256, 00:40:59.929 "data_size": 7936 00:40:59.929 }, 00:40:59.929 { 00:40:59.929 "name": "pt2", 00:40:59.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:59.929 "is_configured": true, 00:40:59.929 "data_offset": 256, 00:40:59.929 "data_size": 7936 00:40:59.929 } 00:40:59.929 ] 00:40:59.929 }' 00:40:59.929 12:04:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:59.929 12:04:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:00.500 12:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:00.757 [2024-06-10 12:04:32.698692] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:00.757 
[2024-06-10 12:04:32.698934] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:00.757 [2024-06-10 12:04:32.699081] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:00.757 [2024-06-10 12:04:32.699223] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:00.757 [2024-06-10 12:04:32.699299] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:41:00.757 12:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:00.757 12:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:41:01.014 12:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:41:01.014 12:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:41:01.014 12:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:41:01.014 12:04:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:01.272 [2024-06-10 12:04:33.150863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:01.272 [2024-06-10 12:04:33.151114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:01.272 [2024-06-10 12:04:33.151190] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:41:01.272 [2024-06-10 12:04:33.151360] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:01.272 [2024-06-10 12:04:33.153462] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:01.272 [2024-06-10 12:04:33.153634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:01.272 [2024-06-10 12:04:33.153816] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:41:01.272 [2024-06-10 12:04:33.153927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:01.272 [2024-06-10 12:04:33.154059] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:41:01.272 [2024-06-10 12:04:33.154147] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:01.272 [2024-06-10 12:04:33.154196] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:41:01.272 [2024-06-10 12:04:33.154372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:01.272 [2024-06-10 12:04:33.154467] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:41:01.272 [2024-06-10 12:04:33.154624] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:41:01.272 [2024-06-10 12:04:33.154773] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:41:01.272 [2024-06-10 12:04:33.154935] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:41:01.272 [2024-06-10 12:04:33.155012] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:41:01.272 [2024-06-10 12:04:33.155198] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:01.272 pt1 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:01.273 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:01.531 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:01.531 "name": "raid_bdev1", 00:41:01.531 "uuid": "8e4e0772-3141-4bb0-90c4-b1077801c87c", 00:41:01.531 "strip_size_kb": 0, 00:41:01.531 "state": "online", 00:41:01.531 "raid_level": "raid1", 00:41:01.531 "superblock": true, 00:41:01.531 "num_base_bdevs": 2, 00:41:01.531 "num_base_bdevs_discovered": 1, 00:41:01.531 "num_base_bdevs_operational": 1, 00:41:01.531 "base_bdevs_list": [ 00:41:01.531 { 00:41:01.531 "name": null, 00:41:01.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:01.531 "is_configured": false, 00:41:01.531 "data_offset": 256, 00:41:01.531 "data_size": 7936 00:41:01.531 }, 00:41:01.531 { 00:41:01.531 "name": "pt2", 00:41:01.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:01.531 "is_configured": true, 00:41:01.531 "data_offset": 256, 00:41:01.531 "data_size": 7936 00:41:01.531 } 00:41:01.531 ] 00:41:01.531 }' 00:41:01.531 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:01.531 12:04:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:02.098 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:41:02.098 12:04:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:41:02.357 12:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:41:02.357 12:04:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:41:02.357 12:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:02.616 [2024-06-10 12:04:34.467639] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 8e4e0772-3141-4bb0-90c4-b1077801c87c '!=' 8e4e0772-3141-4bb0-90c4-b1077801c87c ']' 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 162995 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@949 -- # '[' -z 162995 ']' 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # kill -0 162995 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # uname 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 162995 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # echo 'killing process with pid 162995' 00:41:02.616 killing process with pid 162995 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # kill 162995 00:41:02.616 [2024-06-10 12:04:34.525156] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:02.616 12:04:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # wait 162995 00:41:02.616 [2024-06-10 12:04:34.525351] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:02.616 [2024-06-10 12:04:34.525406] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:02.616 [2024-06-10 12:04:34.525416] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:41:02.875 [2024-06-10 12:04:34.744933] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:04.252 12:04:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:41:04.252 00:41:04.252 real 0m15.905s 00:41:04.252 user 0m28.102s 00:41:04.252 sys 0m2.359s 00:41:04.252 12:04:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:04.252 12:04:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:04.252 ************************************ 00:41:04.252 END TEST raid_superblock_test_md_separate 00:41:04.252 ************************************ 00:41:04.252 12:04:36 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:41:04.252 12:04:36 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:41:04.252 12:04:36 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:41:04.252 12:04:36 bdev_raid -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:41:04.252 12:04:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:04.252 ************************************ 00:41:04.252 START TEST raid_rebuild_test_sb_md_separate 00:41:04.252 ************************************ 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false true 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=163521 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 163521 /var/tmp/spdk-raid.sock 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@830 -- # '[' 
-z 163521 ']' 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:04.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:04.252 12:04:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:04.252 [2024-06-10 12:04:36.187847] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:41:04.253 [2024-06-10 12:04:36.188292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163521 ] 00:41:04.253 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:04.253 Zero copy mechanism will not be used. 00:41:04.511 [2024-06-10 12:04:36.347981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.511 [2024-06-10 12:04:36.551276] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.770 [2024-06-10 12:04:36.783118] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:05.349 12:04:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:05.349 12:04:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@863 -- # return 0 00:41:05.349 12:04:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:41:05.349 12:04:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:41:05.605 BaseBdev1_malloc 00:41:05.605 12:04:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:06.150 [2024-06-10 12:04:37.748719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:06.150 [2024-06-10 12:04:37.749036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:06.151 [2024-06-10 12:04:37.749122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:41:06.151 [2024-06-10 12:04:37.749224] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:06.151 [2024-06-10 12:04:37.751347] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:06.151 [2024-06-10 12:04:37.751499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:06.151 BaseBdev1 00:41:06.151 12:04:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:41:06.151 12:04:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:41:06.151 BaseBdev2_malloc 00:41:06.151 12:04:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:06.433 [2024-06-10 12:04:38.306785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:06.433 [2024-06-10 12:04:38.307034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:06.433 [2024-06-10 12:04:38.307186] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:41:06.433 [2024-06-10 12:04:38.307278] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:06.433 [2024-06-10 12:04:38.309513] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:06.433 [2024-06-10 12:04:38.309676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:06.433 BaseBdev2 00:41:06.433 12:04:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:41:06.729 spare_malloc 00:41:06.729 12:04:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:06.729 spare_delay 00:41:06.729 12:04:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:06.987 [2024-06-10 12:04:39.014002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:06.987 [2024-06-10 12:04:39.014369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:06.987 [2024-06-10 12:04:39.014458] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:06.987 [2024-06-10 12:04:39.014747] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:06.987 [2024-06-10 12:04:39.017010] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:06.987 [2024-06-10 12:04:39.017183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:06.987 spare 00:41:06.987 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:41:07.244 [2024-06-10 12:04:39.214126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:07.244 [2024-06-10 12:04:39.216490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:07.244 [2024-06-10 12:04:39.216863] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:41:07.244 [2024-06-10 12:04:39.216982] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:41:07.244 
[2024-06-10 12:04:39.217243] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:41:07.244 [2024-06-10 12:04:39.217463] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:41:07.244 [2024-06-10 12:04:39.217569] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:41:07.244 [2024-06-10 12:04:39.217749] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:07.244 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:07.502 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:07.502 "name": "raid_bdev1", 00:41:07.502 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:07.502 "strip_size_kb": 0, 00:41:07.502 "state": "online", 00:41:07.502 "raid_level": "raid1", 00:41:07.502 "superblock": true, 00:41:07.502 "num_base_bdevs": 2, 00:41:07.502 "num_base_bdevs_discovered": 2, 00:41:07.502 "num_base_bdevs_operational": 2, 00:41:07.502 "base_bdevs_list": [ 00:41:07.502 { 00:41:07.502 "name": "BaseBdev1", 00:41:07.502 "uuid": "a8f5bdbe-eb28-5966-a1c9-8ce44015cc7c", 00:41:07.502 "is_configured": true, 00:41:07.502 "data_offset": 256, 00:41:07.502 "data_size": 7936 00:41:07.502 }, 00:41:07.502 { 00:41:07.502 "name": "BaseBdev2", 00:41:07.502 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:07.502 "is_configured": true, 00:41:07.502 "data_offset": 256, 00:41:07.502 "data_size": 7936 00:41:07.502 } 00:41:07.502 ] 00:41:07.502 }' 00:41:07.502 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:07.502 12:04:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:08.068 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:08.068 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- 
# jq -r '.[].num_blocks' 00:41:08.326 [2024-06-10 12:04:40.302580] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:08.326 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:41:08.326 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:08.326 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:08.584 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:08.842 [2024-06-10 12:04:40.802501] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:41:08.842 /dev/nbd0 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local i 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # break 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:41:08.842 12:04:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:08.842 1+0 records in 00:41:08.842 1+0 records out 00:41:08.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528688 s, 7.7 MB/s 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # size=4096 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # return 0 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:41:08.842 12:04:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:41:09.777 7936+0 records in 00:41:09.777 7936+0 records out 00:41:09.777 32505856 bytes (33 MB, 31 MiB) copied, 0.774175 s, 42.0 MB/s 00:41:09.777 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:41:09.777 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:09.777 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:09.777 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:09.777 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:41:09.777 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:09.777 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:10.052 [2024-06-10 12:04:41.870205] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 
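The fill step traced above exports raid_bdev1 through NBD, writes every block once with O_DIRECT, and detaches the device again before any rebuild is exercised. A minimal standalone sketch of that sequence, using only the RPCs visible in the trace (socket path, bdev name and geometry are taken from the log; the wait/retry helpers of nbd_common.sh are omitted, and an SPDK app is assumed to already be listening on the socket):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# Export the raid1 bdev as a local block device.
$RPC -s $SOCK nbd_start_disk raid_bdev1 /dev/nbd0

# Write all 7936 blocks of 4096 bytes with O_DIRECT so the data lands on the
# base bdevs rather than in the page cache (~32 MB, matching the dd output above).
dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct

# Detach the NBD device again before the rebuild scenarios start.
$RPC -s $SOCK nbd_stop_disk /dev/nbd0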
00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:41:10.052 12:04:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:41:10.333 [2024-06-10 12:04:42.201951] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:10.333 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:10.590 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:10.590 "name": "raid_bdev1", 00:41:10.590 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:10.590 "strip_size_kb": 0, 00:41:10.590 "state": "online", 00:41:10.590 "raid_level": "raid1", 00:41:10.590 "superblock": true, 00:41:10.590 "num_base_bdevs": 2, 00:41:10.590 "num_base_bdevs_discovered": 1, 00:41:10.590 "num_base_bdevs_operational": 1, 00:41:10.590 "base_bdevs_list": [ 00:41:10.590 { 00:41:10.590 "name": null, 00:41:10.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:10.590 "is_configured": false, 00:41:10.590 "data_offset": 256, 00:41:10.590 "data_size": 7936 00:41:10.590 }, 00:41:10.590 { 00:41:10.590 "name": "BaseBdev2", 00:41:10.590 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:10.590 "is_configured": true, 00:41:10.590 "data_offset": 256, 00:41:10.590 "data_size": 7936 00:41:10.590 } 00:41:10.590 ] 00:41:10.590 }' 00:41:10.590 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:10.590 12:04:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:11.157 12:04:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:11.414 [2024-06-10 12:04:43.383090] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:11.415 [2024-06-10 12:04:43.401494] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018cff0 00:41:11.415 [2024-06-10 12:04:43.403776] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:11.415 12:04:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:12.789 "name": "raid_bdev1", 00:41:12.789 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:12.789 "strip_size_kb": 0, 00:41:12.789 "state": "online", 00:41:12.789 "raid_level": "raid1", 00:41:12.789 "superblock": true, 00:41:12.789 "num_base_bdevs": 2, 00:41:12.789 "num_base_bdevs_discovered": 2, 00:41:12.789 "num_base_bdevs_operational": 2, 00:41:12.789 "process": { 00:41:12.789 "type": "rebuild", 00:41:12.789 "target": "spare", 00:41:12.789 "progress": { 00:41:12.789 "blocks": 3072, 00:41:12.789 "percent": 38 00:41:12.789 } 00:41:12.789 }, 00:41:12.789 "base_bdevs_list": [ 00:41:12.789 { 00:41:12.789 "name": "spare", 00:41:12.789 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:12.789 "is_configured": true, 00:41:12.789 "data_offset": 256, 00:41:12.789 "data_size": 7936 00:41:12.789 }, 00:41:12.789 { 00:41:12.789 "name": "BaseBdev2", 00:41:12.789 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:12.789 "is_configured": true, 00:41:12.789 "data_offset": 256, 00:41:12.789 "data_size": 7936 00:41:12.789 } 00:41:12.789 ] 00:41:12.789 }' 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:12.789 12:04:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:41:13.048 [2024-06-10 12:04:44.953978] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:13.048 [2024-06-10 12:04:45.015147] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
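The scenario traced above attaches the passthru bdev "spare" to raid_bdev1, lets the rebuild start, and then removes the same bdev while the rebuild is still running; the *WARNING* and "No such device" lines are the expected result of tearing the rebuild target out from under the process. A rough sketch of that flow, reduced to the RPCs that appear in the log (socket path and bdev names come from the trace; sleeps, error handling and the surrounding test plumbing are left out):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# Attach "spare" as a new base bdev; this kicks off a rebuild on raid_bdev1.
$RPC -s $SOCK bdev_raid_add_base_bdev raid_bdev1 spare

# Drop the same bdev again while the rebuild is still in progress.
$RPC -s $SOCK bdev_raid_remove_base_bdev spare

# The array is expected to survive as a degraded raid1: still online, with a
# single discovered base bdev (as the JSON dump that follows in the log shows).
$RPC -s $SOCK bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)"'
# expected output: online raid1 1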
00:41:13.048 [2024-06-10 12:04:45.015451] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:13.048 [2024-06-10 12:04:45.015609] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:13.048 [2024-06-10 12:04:45.015656] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:13.048 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:13.307 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:13.307 "name": "raid_bdev1", 00:41:13.307 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:13.307 "strip_size_kb": 0, 00:41:13.307 "state": "online", 00:41:13.307 "raid_level": "raid1", 00:41:13.307 "superblock": true, 00:41:13.307 "num_base_bdevs": 2, 00:41:13.307 "num_base_bdevs_discovered": 1, 00:41:13.307 "num_base_bdevs_operational": 1, 00:41:13.307 "base_bdevs_list": [ 00:41:13.307 { 00:41:13.307 "name": null, 00:41:13.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:13.307 "is_configured": false, 00:41:13.307 "data_offset": 256, 00:41:13.307 "data_size": 7936 00:41:13.307 }, 00:41:13.307 { 00:41:13.307 "name": "BaseBdev2", 00:41:13.307 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:13.307 "is_configured": true, 00:41:13.307 "data_offset": 256, 00:41:13.307 "data_size": 7936 00:41:13.307 } 00:41:13.307 ] 00:41:13.307 }' 00:41:13.307 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:13.307 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:13.875 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:13.875 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:13.875 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:13.875 12:04:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:13.875 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:13.875 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:13.875 12:04:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:14.135 12:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:14.135 "name": "raid_bdev1", 00:41:14.135 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:14.135 "strip_size_kb": 0, 00:41:14.135 "state": "online", 00:41:14.135 "raid_level": "raid1", 00:41:14.135 "superblock": true, 00:41:14.135 "num_base_bdevs": 2, 00:41:14.135 "num_base_bdevs_discovered": 1, 00:41:14.135 "num_base_bdevs_operational": 1, 00:41:14.135 "base_bdevs_list": [ 00:41:14.135 { 00:41:14.135 "name": null, 00:41:14.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:14.135 "is_configured": false, 00:41:14.135 "data_offset": 256, 00:41:14.135 "data_size": 7936 00:41:14.135 }, 00:41:14.135 { 00:41:14.135 "name": "BaseBdev2", 00:41:14.135 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:14.135 "is_configured": true, 00:41:14.135 "data_offset": 256, 00:41:14.135 "data_size": 7936 00:41:14.135 } 00:41:14.135 ] 00:41:14.135 }' 00:41:14.135 12:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:14.135 12:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:14.135 12:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:14.393 12:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:14.393 12:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:14.652 [2024-06-10 12:04:46.467387] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:14.652 [2024-06-10 12:04:46.484534] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:41:14.652 [2024-06-10 12:04:46.486941] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:14.652 12:04:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:15.588 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:15.588 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:15.588 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:15.588 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:15.588 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:15.588 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:15.588 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:15.851 "name": "raid_bdev1", 00:41:15.851 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:15.851 "strip_size_kb": 0, 00:41:15.851 "state": "online", 00:41:15.851 "raid_level": "raid1", 00:41:15.851 "superblock": true, 00:41:15.851 "num_base_bdevs": 2, 00:41:15.851 "num_base_bdevs_discovered": 2, 00:41:15.851 "num_base_bdevs_operational": 2, 00:41:15.851 "process": { 00:41:15.851 "type": "rebuild", 00:41:15.851 "target": "spare", 00:41:15.851 "progress": { 00:41:15.851 "blocks": 3072, 00:41:15.851 "percent": 38 00:41:15.851 } 00:41:15.851 }, 00:41:15.851 "base_bdevs_list": [ 00:41:15.851 { 00:41:15.851 "name": "spare", 00:41:15.851 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:15.851 "is_configured": true, 00:41:15.851 "data_offset": 256, 00:41:15.851 "data_size": 7936 00:41:15.851 }, 00:41:15.851 { 00:41:15.851 "name": "BaseBdev2", 00:41:15.851 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:15.851 "is_configured": true, 00:41:15.851 "data_offset": 256, 00:41:15.851 "data_size": 7936 00:41:15.851 } 00:41:15.851 ] 00:41:15.851 }' 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:41:15.851 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1547 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:15.851 12:04:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:15.851 12:04:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:16.110 12:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:16.110 "name": "raid_bdev1", 00:41:16.110 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:16.110 "strip_size_kb": 0, 00:41:16.110 "state": "online", 00:41:16.110 "raid_level": "raid1", 00:41:16.110 "superblock": true, 00:41:16.110 "num_base_bdevs": 2, 00:41:16.110 "num_base_bdevs_discovered": 2, 00:41:16.110 "num_base_bdevs_operational": 2, 00:41:16.110 "process": { 00:41:16.110 "type": "rebuild", 00:41:16.110 "target": "spare", 00:41:16.110 "progress": { 00:41:16.110 "blocks": 3840, 00:41:16.110 "percent": 48 00:41:16.110 } 00:41:16.110 }, 00:41:16.110 "base_bdevs_list": [ 00:41:16.110 { 00:41:16.110 "name": "spare", 00:41:16.110 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:16.110 "is_configured": true, 00:41:16.110 "data_offset": 256, 00:41:16.110 "data_size": 7936 00:41:16.110 }, 00:41:16.110 { 00:41:16.110 "name": "BaseBdev2", 00:41:16.110 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:16.110 "is_configured": true, 00:41:16.110 "data_offset": 256, 00:41:16.110 "data_size": 7936 00:41:16.110 } 00:41:16.110 ] 00:41:16.110 }' 00:41:16.110 12:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:16.110 12:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:16.110 12:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:16.369 12:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:16.369 12:04:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:41:17.306 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:41:17.306 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:17.306 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:17.306 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:17.306 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:17.306 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:17.306 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:17.306 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:17.565 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:17.565 "name": "raid_bdev1", 00:41:17.565 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:17.565 "strip_size_kb": 0, 00:41:17.565 "state": "online", 00:41:17.565 "raid_level": "raid1", 00:41:17.565 "superblock": true, 00:41:17.565 "num_base_bdevs": 2, 00:41:17.565 "num_base_bdevs_discovered": 2, 00:41:17.565 "num_base_bdevs_operational": 2, 00:41:17.565 "process": { 00:41:17.565 "type": "rebuild", 00:41:17.565 "target": "spare", 00:41:17.565 
"progress": { 00:41:17.565 "blocks": 7424, 00:41:17.565 "percent": 93 00:41:17.565 } 00:41:17.565 }, 00:41:17.565 "base_bdevs_list": [ 00:41:17.565 { 00:41:17.565 "name": "spare", 00:41:17.565 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:17.565 "is_configured": true, 00:41:17.565 "data_offset": 256, 00:41:17.565 "data_size": 7936 00:41:17.565 }, 00:41:17.565 { 00:41:17.565 "name": "BaseBdev2", 00:41:17.565 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:17.565 "is_configured": true, 00:41:17.565 "data_offset": 256, 00:41:17.565 "data_size": 7936 00:41:17.565 } 00:41:17.565 ] 00:41:17.565 }' 00:41:17.565 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:17.565 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:17.565 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:17.565 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:17.565 12:04:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:41:17.565 [2024-06-10 12:04:49.606841] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:17.565 [2024-06-10 12:04:49.607126] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:17.565 [2024-06-10 12:04:49.607433] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:18.555 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:41:18.555 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:18.555 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:18.555 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:18.555 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:18.555 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:18.555 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:18.555 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:18.815 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:18.815 "name": "raid_bdev1", 00:41:18.815 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:18.815 "strip_size_kb": 0, 00:41:18.815 "state": "online", 00:41:18.815 "raid_level": "raid1", 00:41:18.815 "superblock": true, 00:41:18.815 "num_base_bdevs": 2, 00:41:18.815 "num_base_bdevs_discovered": 2, 00:41:18.815 "num_base_bdevs_operational": 2, 00:41:18.815 "base_bdevs_list": [ 00:41:18.815 { 00:41:18.815 "name": "spare", 00:41:18.815 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:18.815 "is_configured": true, 00:41:18.815 "data_offset": 256, 00:41:18.815 "data_size": 7936 00:41:18.815 }, 00:41:18.815 { 00:41:18.815 "name": "BaseBdev2", 00:41:18.815 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:18.815 "is_configured": true, 
00:41:18.815 "data_offset": 256, 00:41:18.815 "data_size": 7936 00:41:18.815 } 00:41:18.815 ] 00:41:18.815 }' 00:41:18.815 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:19.074 12:04:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:19.333 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:19.333 "name": "raid_bdev1", 00:41:19.333 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:19.333 "strip_size_kb": 0, 00:41:19.333 "state": "online", 00:41:19.333 "raid_level": "raid1", 00:41:19.333 "superblock": true, 00:41:19.333 "num_base_bdevs": 2, 00:41:19.333 "num_base_bdevs_discovered": 2, 00:41:19.333 "num_base_bdevs_operational": 2, 00:41:19.333 "base_bdevs_list": [ 00:41:19.333 { 00:41:19.333 "name": "spare", 00:41:19.333 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:19.333 "is_configured": true, 00:41:19.333 "data_offset": 256, 00:41:19.333 "data_size": 7936 00:41:19.333 }, 00:41:19.333 { 00:41:19.333 "name": "BaseBdev2", 00:41:19.333 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:19.333 "is_configured": true, 00:41:19.333 "data_offset": 256, 00:41:19.333 "data_size": 7936 00:41:19.333 } 00:41:19.333 ] 00:41:19.333 }' 00:41:19.333 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:19.333 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:19.333 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:19.592 12:04:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:19.592 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:19.851 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:19.851 "name": "raid_bdev1", 00:41:19.851 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:19.851 "strip_size_kb": 0, 00:41:19.851 "state": "online", 00:41:19.851 "raid_level": "raid1", 00:41:19.851 "superblock": true, 00:41:19.851 "num_base_bdevs": 2, 00:41:19.851 "num_base_bdevs_discovered": 2, 00:41:19.851 "num_base_bdevs_operational": 2, 00:41:19.851 "base_bdevs_list": [ 00:41:19.851 { 00:41:19.851 "name": "spare", 00:41:19.851 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:19.851 "is_configured": true, 00:41:19.851 "data_offset": 256, 00:41:19.851 "data_size": 7936 00:41:19.851 }, 00:41:19.851 { 00:41:19.851 "name": "BaseBdev2", 00:41:19.851 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:19.851 "is_configured": true, 00:41:19.851 "data_offset": 256, 00:41:19.851 "data_size": 7936 00:41:19.851 } 00:41:19.851 ] 00:41:19.851 }' 00:41:19.851 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:19.851 12:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:20.417 12:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:20.983 [2024-06-10 12:04:52.805461] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:20.983 [2024-06-10 12:04:52.805693] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:20.983 [2024-06-10 12:04:52.805861] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:20.983 [2024-06-10 12:04:52.806040] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:20.983 [2024-06-10 12:04:52.806145] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:41:20.983 12:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:41:20.983 12:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:21.241 12:04:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:21.241 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:41:21.501 /dev/nbd0 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local i 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # break 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:21.501 1+0 records in 00:41:21.501 1+0 records out 00:41:21.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622933 s, 6.6 MB/s 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # size=4096 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # return 0 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:21.501 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:41:21.803 /dev/nbd1 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local i 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # break 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:21.803 1+0 records in 00:41:21.803 1+0 records out 00:41:21.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458339 s, 8.9 MB/s 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # size=4096 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # return 0 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:21.803 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:41:22.062 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:41:22.062 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:22.062 12:04:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:22.062 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:22.062 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:41:22.062 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:22.062 12:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:22.321 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:22.579 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:41:22.579 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:41:22.579 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:41:22.579 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:41:22.838 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:23.097 [2024-06-10 12:04:54.953455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:23.097 [2024-06-10 12:04:54.953549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:23.097 [2024-06-10 12:04:54.953613] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:41:23.097 [2024-06-10 12:04:54.953637] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:23.097 [2024-06-10 12:04:54.956093] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:23.097 [2024-06-10 12:04:54.956160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:23.097 [2024-06-10 12:04:54.956328] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:23.097 [2024-06-10 12:04:54.956391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:23.097 [2024-06-10 12:04:54.956512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:23.097 spare 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:23.097 12:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:23.097 [2024-06-10 12:04:55.056603] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:41:23.097 [2024-06-10 12:04:55.056639] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:41:23.097 [2024-06-10 12:04:55.056822] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:41:23.097 [2024-06-10 12:04:55.056971] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:41:23.097 [2024-06-10 12:04:55.056982] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:41:23.097 [2024-06-10 12:04:55.057112] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:23.356 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:23.356 "name": "raid_bdev1", 00:41:23.356 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:23.356 "strip_size_kb": 0, 00:41:23.356 "state": "online", 00:41:23.356 "raid_level": "raid1", 00:41:23.356 "superblock": true, 00:41:23.356 "num_base_bdevs": 2, 00:41:23.356 
"num_base_bdevs_discovered": 2, 00:41:23.356 "num_base_bdevs_operational": 2, 00:41:23.356 "base_bdevs_list": [ 00:41:23.356 { 00:41:23.356 "name": "spare", 00:41:23.356 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:23.356 "is_configured": true, 00:41:23.356 "data_offset": 256, 00:41:23.356 "data_size": 7936 00:41:23.356 }, 00:41:23.356 { 00:41:23.356 "name": "BaseBdev2", 00:41:23.356 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:23.356 "is_configured": true, 00:41:23.356 "data_offset": 256, 00:41:23.356 "data_size": 7936 00:41:23.356 } 00:41:23.356 ] 00:41:23.356 }' 00:41:23.356 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:23.356 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:23.921 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:23.921 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:23.921 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:23.921 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:23.921 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:23.921 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:23.921 12:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:24.180 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:24.180 "name": "raid_bdev1", 00:41:24.180 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:24.180 "strip_size_kb": 0, 00:41:24.180 "state": "online", 00:41:24.180 "raid_level": "raid1", 00:41:24.180 "superblock": true, 00:41:24.180 "num_base_bdevs": 2, 00:41:24.180 "num_base_bdevs_discovered": 2, 00:41:24.180 "num_base_bdevs_operational": 2, 00:41:24.180 "base_bdevs_list": [ 00:41:24.180 { 00:41:24.180 "name": "spare", 00:41:24.180 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:24.180 "is_configured": true, 00:41:24.180 "data_offset": 256, 00:41:24.180 "data_size": 7936 00:41:24.180 }, 00:41:24.180 { 00:41:24.180 "name": "BaseBdev2", 00:41:24.180 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:24.180 "is_configured": true, 00:41:24.180 "data_offset": 256, 00:41:24.180 "data_size": 7936 00:41:24.180 } 00:41:24.180 ] 00:41:24.180 }' 00:41:24.180 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:24.180 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:24.180 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:24.180 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:24.180 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:24.180 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 
00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:41:24.747 [2024-06-10 12:04:56.781950] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:24.747 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:25.005 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:25.005 12:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:25.263 12:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:25.263 "name": "raid_bdev1", 00:41:25.263 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:25.263 "strip_size_kb": 0, 00:41:25.263 "state": "online", 00:41:25.263 "raid_level": "raid1", 00:41:25.263 "superblock": true, 00:41:25.263 "num_base_bdevs": 2, 00:41:25.263 "num_base_bdevs_discovered": 1, 00:41:25.263 "num_base_bdevs_operational": 1, 00:41:25.263 "base_bdevs_list": [ 00:41:25.263 { 00:41:25.263 "name": null, 00:41:25.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:25.263 "is_configured": false, 00:41:25.263 "data_offset": 256, 00:41:25.263 "data_size": 7936 00:41:25.263 }, 00:41:25.263 { 00:41:25.263 "name": "BaseBdev2", 00:41:25.263 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:25.263 "is_configured": true, 00:41:25.263 "data_offset": 256, 00:41:25.263 "data_size": 7936 00:41:25.263 } 00:41:25.263 ] 00:41:25.263 }' 00:41:25.263 12:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:25.263 12:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:25.829 12:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:26.088 [2024-06-10 12:04:58.082279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:26.088 [2024-06-10 12:04:58.082476] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:26.088 [2024-06-10 12:04:58.082491] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:41:26.088 [2024-06-10 12:04:58.082545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:26.088 [2024-06-10 12:04:58.100505] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:41:26.088 [2024-06-10 12:04:58.102838] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:26.088 12:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:27.465 "name": "raid_bdev1", 00:41:27.465 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:27.465 "strip_size_kb": 0, 00:41:27.465 "state": "online", 00:41:27.465 "raid_level": "raid1", 00:41:27.465 "superblock": true, 00:41:27.465 "num_base_bdevs": 2, 00:41:27.465 "num_base_bdevs_discovered": 2, 00:41:27.465 "num_base_bdevs_operational": 2, 00:41:27.465 "process": { 00:41:27.465 "type": "rebuild", 00:41:27.465 "target": "spare", 00:41:27.465 "progress": { 00:41:27.465 "blocks": 3328, 00:41:27.465 "percent": 41 00:41:27.465 } 00:41:27.465 }, 00:41:27.465 "base_bdevs_list": [ 00:41:27.465 { 00:41:27.465 "name": "spare", 00:41:27.465 "uuid": "bd16f428-5926-5c74-a991-11a95533cae3", 00:41:27.465 "is_configured": true, 00:41:27.465 "data_offset": 256, 00:41:27.465 "data_size": 7936 00:41:27.465 }, 00:41:27.465 { 00:41:27.465 "name": "BaseBdev2", 00:41:27.465 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:27.465 "is_configured": true, 00:41:27.465 "data_offset": 256, 00:41:27.465 "data_size": 7936 00:41:27.465 } 00:41:27.465 ] 00:41:27.465 }' 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:27.465 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:41:27.723 [2024-06-10 12:04:59.780661] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:27.981 [2024-06-10 12:04:59.813635] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:27.981 [2024-06-10 12:04:59.813737] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:27.981 [2024-06-10 12:04:59.813755] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:27.981 [2024-06-10 12:04:59.813762] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:27.981 12:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:28.241 12:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:28.241 "name": "raid_bdev1", 00:41:28.241 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:28.241 "strip_size_kb": 0, 00:41:28.241 "state": "online", 00:41:28.241 "raid_level": "raid1", 00:41:28.241 "superblock": true, 00:41:28.241 "num_base_bdevs": 2, 00:41:28.241 "num_base_bdevs_discovered": 1, 00:41:28.241 "num_base_bdevs_operational": 1, 00:41:28.241 "base_bdevs_list": [ 00:41:28.241 { 00:41:28.241 "name": null, 00:41:28.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:28.241 "is_configured": false, 00:41:28.241 "data_offset": 256, 00:41:28.241 "data_size": 7936 00:41:28.241 }, 00:41:28.241 { 00:41:28.241 "name": "BaseBdev2", 00:41:28.241 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:28.241 "is_configured": true, 00:41:28.241 "data_offset": 256, 00:41:28.241 "data_size": 7936 00:41:28.241 } 00:41:28.241 ] 00:41:28.241 }' 00:41:28.241 12:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:28.241 12:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:28.807 12:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:29.065 [2024-06-10 12:05:01.019133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:29.065 [2024-06-10 12:05:01.019229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:29.065 [2024-06-10 12:05:01.019264] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:41:29.065 [2024-06-10 12:05:01.019293] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:29.065 [2024-06-10 12:05:01.019575] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:29.065 [2024-06-10 12:05:01.019606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:29.065 [2024-06-10 12:05:01.019730] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:29.065 [2024-06-10 12:05:01.019751] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:29.065 [2024-06-10 12:05:01.019759] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:41:29.065 [2024-06-10 12:05:01.019809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:29.065 [2024-06-10 12:05:01.036058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:41:29.065 spare 00:41:29.065 [2024-06-10 12:05:01.038211] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:29.065 12:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:30.466 "name": "raid_bdev1", 00:41:30.466 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:30.466 "strip_size_kb": 0, 00:41:30.466 "state": "online", 00:41:30.466 "raid_level": "raid1", 00:41:30.466 "superblock": true, 00:41:30.466 "num_base_bdevs": 2, 00:41:30.466 "num_base_bdevs_discovered": 2, 00:41:30.466 "num_base_bdevs_operational": 2, 00:41:30.466 "process": { 00:41:30.466 "type": "rebuild", 00:41:30.466 "target": "spare", 00:41:30.466 "progress": { 00:41:30.466 "blocks": 3072, 00:41:30.466 "percent": 38 00:41:30.466 } 00:41:30.466 }, 00:41:30.466 "base_bdevs_list": [ 00:41:30.466 { 00:41:30.466 "name": "spare", 00:41:30.466 "uuid": 
"bd16f428-5926-5c74-a991-11a95533cae3", 00:41:30.466 "is_configured": true, 00:41:30.466 "data_offset": 256, 00:41:30.466 "data_size": 7936 00:41:30.466 }, 00:41:30.466 { 00:41:30.466 "name": "BaseBdev2", 00:41:30.466 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:30.466 "is_configured": true, 00:41:30.466 "data_offset": 256, 00:41:30.466 "data_size": 7936 00:41:30.466 } 00:41:30.466 ] 00:41:30.466 }' 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:30.466 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:41:30.724 [2024-06-10 12:05:02.664088] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:30.724 [2024-06-10 12:05:02.749167] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:30.724 [2024-06-10 12:05:02.749252] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:30.724 [2024-06-10 12:05:02.749269] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:30.724 [2024-06-10 12:05:02.749278] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:30.983 "name": "raid_bdev1", 00:41:30.983 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:30.983 "strip_size_kb": 0, 00:41:30.983 "state": 
"online", 00:41:30.983 "raid_level": "raid1", 00:41:30.983 "superblock": true, 00:41:30.983 "num_base_bdevs": 2, 00:41:30.983 "num_base_bdevs_discovered": 1, 00:41:30.983 "num_base_bdevs_operational": 1, 00:41:30.983 "base_bdevs_list": [ 00:41:30.983 { 00:41:30.983 "name": null, 00:41:30.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.983 "is_configured": false, 00:41:30.983 "data_offset": 256, 00:41:30.983 "data_size": 7936 00:41:30.983 }, 00:41:30.983 { 00:41:30.983 "name": "BaseBdev2", 00:41:30.983 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:30.983 "is_configured": true, 00:41:30.983 "data_offset": 256, 00:41:30.983 "data_size": 7936 00:41:30.983 } 00:41:30.983 ] 00:41:30.983 }' 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:30.983 12:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:31.551 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:31.551 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:31.551 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:31.551 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:31.551 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:31.551 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:31.551 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:31.810 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:31.810 "name": "raid_bdev1", 00:41:31.810 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:31.810 "strip_size_kb": 0, 00:41:31.810 "state": "online", 00:41:31.810 "raid_level": "raid1", 00:41:31.810 "superblock": true, 00:41:31.810 "num_base_bdevs": 2, 00:41:31.810 "num_base_bdevs_discovered": 1, 00:41:31.810 "num_base_bdevs_operational": 1, 00:41:31.810 "base_bdevs_list": [ 00:41:31.810 { 00:41:31.810 "name": null, 00:41:31.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:31.810 "is_configured": false, 00:41:31.810 "data_offset": 256, 00:41:31.810 "data_size": 7936 00:41:31.810 }, 00:41:31.810 { 00:41:31.810 "name": "BaseBdev2", 00:41:31.810 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:31.810 "is_configured": true, 00:41:31.810 "data_offset": 256, 00:41:31.810 "data_size": 7936 00:41:31.810 } 00:41:31.810 ] 00:41:31.810 }' 00:41:31.810 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:31.810 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:31.810 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:32.068 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:32.068 12:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 
00:41:32.326 12:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:32.326 [2024-06-10 12:05:04.335155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:32.326 [2024-06-10 12:05:04.335251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:32.326 [2024-06-10 12:05:04.335292] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:41:32.326 [2024-06-10 12:05:04.335313] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:32.326 [2024-06-10 12:05:04.335558] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:32.326 [2024-06-10 12:05:04.335583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:32.326 [2024-06-10 12:05:04.335731] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:41:32.326 [2024-06-10 12:05:04.335754] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:41:32.326 [2024-06-10 12:05:04.335762] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:41:32.326 BaseBdev1 00:41:32.326 12:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:33.704 "name": "raid_bdev1", 00:41:33.704 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:33.704 "strip_size_kb": 0, 00:41:33.704 "state": "online", 00:41:33.704 "raid_level": "raid1", 00:41:33.704 "superblock": true, 00:41:33.704 "num_base_bdevs": 2, 00:41:33.704 "num_base_bdevs_discovered": 1, 00:41:33.704 
"num_base_bdevs_operational": 1, 00:41:33.704 "base_bdevs_list": [ 00:41:33.704 { 00:41:33.704 "name": null, 00:41:33.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:33.704 "is_configured": false, 00:41:33.704 "data_offset": 256, 00:41:33.704 "data_size": 7936 00:41:33.704 }, 00:41:33.704 { 00:41:33.704 "name": "BaseBdev2", 00:41:33.704 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:33.704 "is_configured": true, 00:41:33.704 "data_offset": 256, 00:41:33.704 "data_size": 7936 00:41:33.704 } 00:41:33.704 ] 00:41:33.704 }' 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:33.704 12:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:34.272 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:34.272 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:34.272 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:34.272 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:34.272 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:34.272 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:34.272 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:34.840 "name": "raid_bdev1", 00:41:34.840 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:34.840 "strip_size_kb": 0, 00:41:34.840 "state": "online", 00:41:34.840 "raid_level": "raid1", 00:41:34.840 "superblock": true, 00:41:34.840 "num_base_bdevs": 2, 00:41:34.840 "num_base_bdevs_discovered": 1, 00:41:34.840 "num_base_bdevs_operational": 1, 00:41:34.840 "base_bdevs_list": [ 00:41:34.840 { 00:41:34.840 "name": null, 00:41:34.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:34.840 "is_configured": false, 00:41:34.840 "data_offset": 256, 00:41:34.840 "data_size": 7936 00:41:34.840 }, 00:41:34.840 { 00:41:34.840 "name": "BaseBdev2", 00:41:34.840 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:34.840 "is_configured": true, 00:41:34.840 "data_offset": 256, 00:41:34.840 "data_size": 7936 00:41:34.840 } 00:41:34.840 ] 00:41:34.840 }' 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@649 -- # local es=0 00:41:34.840 12:05:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:41:34.840 12:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:35.099 [2024-06-10 12:05:07.003820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:35.099 [2024-06-10 12:05:07.004090] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:41:35.099 [2024-06-10 12:05:07.004112] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:41:35.099 request: 00:41:35.099 { 00:41:35.099 "base_bdev": "BaseBdev1", 00:41:35.099 "raid_bdev": "raid_bdev1", 00:41:35.099 "method": "bdev_raid_add_base_bdev", 00:41:35.099 "req_id": 1 00:41:35.099 } 00:41:35.099 Got JSON-RPC error response 00:41:35.099 response: 00:41:35.099 { 00:41:35.099 "code": -22, 00:41:35.099 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:41:35.099 } 00:41:35.099 12:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # es=1 00:41:35.099 12:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:35.099 12:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:35.099 12:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:35.099 12:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:36.034 12:05:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:36.034 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:36.293 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:36.293 "name": "raid_bdev1", 00:41:36.293 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:36.293 "strip_size_kb": 0, 00:41:36.293 "state": "online", 00:41:36.293 "raid_level": "raid1", 00:41:36.293 "superblock": true, 00:41:36.293 "num_base_bdevs": 2, 00:41:36.293 "num_base_bdevs_discovered": 1, 00:41:36.293 "num_base_bdevs_operational": 1, 00:41:36.293 "base_bdevs_list": [ 00:41:36.293 { 00:41:36.293 "name": null, 00:41:36.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:36.293 "is_configured": false, 00:41:36.293 "data_offset": 256, 00:41:36.293 "data_size": 7936 00:41:36.293 }, 00:41:36.293 { 00:41:36.293 "name": "BaseBdev2", 00:41:36.293 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:36.293 "is_configured": true, 00:41:36.293 "data_offset": 256, 00:41:36.293 "data_size": 7936 00:41:36.293 } 00:41:36.293 ] 00:41:36.293 }' 00:41:36.293 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:36.293 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:37.229 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:37.229 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:37.229 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:37.229 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:37.229 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:37.229 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:37.229 12:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:37.229 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:37.229 "name": "raid_bdev1", 00:41:37.229 "uuid": "18478f3c-ec31-41f3-ae1b-3f33db3931c3", 00:41:37.229 "strip_size_kb": 0, 00:41:37.229 "state": "online", 00:41:37.229 "raid_level": "raid1", 00:41:37.229 "superblock": true, 00:41:37.229 "num_base_bdevs": 2, 00:41:37.229 
"num_base_bdevs_discovered": 1, 00:41:37.229 "num_base_bdevs_operational": 1, 00:41:37.229 "base_bdevs_list": [ 00:41:37.229 { 00:41:37.229 "name": null, 00:41:37.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:37.229 "is_configured": false, 00:41:37.229 "data_offset": 256, 00:41:37.229 "data_size": 7936 00:41:37.229 }, 00:41:37.229 { 00:41:37.229 "name": "BaseBdev2", 00:41:37.229 "uuid": "8c586e73-522f-54f3-b367-f2f258a9d48d", 00:41:37.229 "is_configured": true, 00:41:37.229 "data_offset": 256, 00:41:37.229 "data_size": 7936 00:41:37.229 } 00:41:37.229 ] 00:41:37.229 }' 00:41:37.229 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:37.229 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:37.229 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 163521 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@949 -- # '[' -z 163521 ']' 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # kill -0 163521 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # uname 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 163521 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # echo 'killing process with pid 163521' 00:41:37.488 killing process with pid 163521 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # kill 163521 00:41:37.488 Received shutdown signal, test time was about 60.000000 seconds 00:41:37.488 00:41:37.488 Latency(us) 00:41:37.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:37.488 =================================================================================================================== 00:41:37.488 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:37.488 [2024-06-10 12:05:09.346777] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:37.488 12:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # wait 163521 00:41:37.488 [2024-06-10 12:05:09.346975] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:37.488 [2024-06-10 12:05:09.347052] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:37.488 [2024-06-10 12:05:09.347065] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:41:37.747 [2024-06-10 12:05:09.749355] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:39.666 
************************************ 00:41:39.666 END TEST raid_rebuild_test_sb_md_separate 00:41:39.666 ************************************ 00:41:39.666 12:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:41:39.666 00:41:39.666 real 0m35.375s 00:41:39.666 user 0m55.188s 00:41:39.666 sys 0m4.796s 00:41:39.666 12:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:39.666 12:05:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:39.666 12:05:11 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:41:39.666 12:05:11 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:41:39.666 12:05:11 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:41:39.666 12:05:11 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:39.666 12:05:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:39.666 ************************************ 00:41:39.666 START TEST raid_state_function_test_sb_md_interleaved 00:41:39.666 ************************************ 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:41:39.666 12:05:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=164426 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 164426' 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:41:39.666 Process raid pid: 164426 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 164426 /var/tmp/spdk-raid.sock 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@830 -- # '[' -z 164426 ']' 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:39.666 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:39.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:39.667 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:39.667 12:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:39.667 [2024-06-10 12:05:11.637066] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:41:39.667 [2024-06-10 12:05:11.637262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:39.957 [2024-06-10 12:05:11.800579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.215 [2024-06-10 12:05:12.018090] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.215 [2024-06-10 12:05:12.247571] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:40.781 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:40.781 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:41:40.781 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:41:41.038 [2024-06-10 12:05:12.872630] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:41.038 [2024-06-10 12:05:12.872721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:41.038 [2024-06-10 12:05:12.872734] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:41.038 [2024-06-10 12:05:12.872780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:41.038 12:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:41.296 12:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:41.296 "name": "Existed_Raid", 00:41:41.296 "uuid": "f5ba19eb-cd38-4014-92dc-15e342b4ea11", 00:41:41.296 "strip_size_kb": 0, 
00:41:41.296 "state": "configuring", 00:41:41.296 "raid_level": "raid1", 00:41:41.296 "superblock": true, 00:41:41.296 "num_base_bdevs": 2, 00:41:41.296 "num_base_bdevs_discovered": 0, 00:41:41.296 "num_base_bdevs_operational": 2, 00:41:41.296 "base_bdevs_list": [ 00:41:41.296 { 00:41:41.296 "name": "BaseBdev1", 00:41:41.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:41.296 "is_configured": false, 00:41:41.296 "data_offset": 0, 00:41:41.296 "data_size": 0 00:41:41.296 }, 00:41:41.296 { 00:41:41.296 "name": "BaseBdev2", 00:41:41.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:41.296 "is_configured": false, 00:41:41.296 "data_offset": 0, 00:41:41.296 "data_size": 0 00:41:41.296 } 00:41:41.296 ] 00:41:41.296 }' 00:41:41.296 12:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:41.296 12:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:41.861 12:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:41:42.120 [2024-06-10 12:05:14.012737] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:42.120 [2024-06-10 12:05:14.012782] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:41:42.120 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:41:42.379 [2024-06-10 12:05:14.316818] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:42.379 [2024-06-10 12:05:14.316881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:42.379 [2024-06-10 12:05:14.316891] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:42.379 [2024-06-10 12:05:14.316916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:42.379 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:41:42.637 [2024-06-10 12:05:14.551910] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:42.637 BaseBdev1 00:41:42.637 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:41:42.638 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:41:42.638 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:41:42.638 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local i 00:41:42.638 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:41:42.638 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:41:42.638 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:41:42.897 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:41:42.897 [ 00:41:42.897 { 00:41:42.897 "name": "BaseBdev1", 00:41:42.897 "aliases": [ 00:41:42.897 "af0fa781-1f62-4096-b3f0-8cbf0540ad07" 00:41:42.897 ], 00:41:42.897 "product_name": "Malloc disk", 00:41:42.897 "block_size": 4128, 00:41:42.897 "num_blocks": 8192, 00:41:42.897 "uuid": "af0fa781-1f62-4096-b3f0-8cbf0540ad07", 00:41:42.897 "md_size": 32, 00:41:42.897 "md_interleave": true, 00:41:42.897 "dif_type": 0, 00:41:42.897 "assigned_rate_limits": { 00:41:42.897 "rw_ios_per_sec": 0, 00:41:42.897 "rw_mbytes_per_sec": 0, 00:41:42.897 "r_mbytes_per_sec": 0, 00:41:42.897 "w_mbytes_per_sec": 0 00:41:42.897 }, 00:41:42.897 "claimed": true, 00:41:42.897 "claim_type": "exclusive_write", 00:41:42.897 "zoned": false, 00:41:42.897 "supported_io_types": { 00:41:42.897 "read": true, 00:41:42.897 "write": true, 00:41:42.897 "unmap": true, 00:41:42.897 "write_zeroes": true, 00:41:42.897 "flush": true, 00:41:42.897 "reset": true, 00:41:42.897 "compare": false, 00:41:42.897 "compare_and_write": false, 00:41:42.897 "abort": true, 00:41:42.897 "nvme_admin": false, 00:41:42.897 "nvme_io": false 00:41:42.897 }, 00:41:42.897 "memory_domains": [ 00:41:42.897 { 00:41:42.897 "dma_device_id": "system", 00:41:42.897 "dma_device_type": 1 00:41:42.897 }, 00:41:42.897 { 00:41:42.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:42.897 "dma_device_type": 2 00:41:42.897 } 00:41:42.897 ], 00:41:42.897 "driver_specific": {} 00:41:42.897 } 00:41:42.897 ] 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # return 0 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:43.156 12:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:43.156 12:05:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:43.156 "name": "Existed_Raid", 00:41:43.156 "uuid": "00e92728-f87a-4659-b66d-b883c4652c05", 00:41:43.156 "strip_size_kb": 0, 00:41:43.156 "state": "configuring", 00:41:43.156 "raid_level": "raid1", 00:41:43.156 "superblock": true, 00:41:43.156 "num_base_bdevs": 2, 00:41:43.156 "num_base_bdevs_discovered": 1, 00:41:43.156 "num_base_bdevs_operational": 2, 00:41:43.156 "base_bdevs_list": [ 00:41:43.156 { 00:41:43.156 "name": "BaseBdev1", 00:41:43.156 "uuid": "af0fa781-1f62-4096-b3f0-8cbf0540ad07", 00:41:43.156 "is_configured": true, 00:41:43.156 "data_offset": 256, 00:41:43.156 "data_size": 7936 00:41:43.156 }, 00:41:43.156 { 00:41:43.156 "name": "BaseBdev2", 00:41:43.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:43.156 "is_configured": false, 00:41:43.156 "data_offset": 0, 00:41:43.156 "data_size": 0 00:41:43.156 } 00:41:43.156 ] 00:41:43.156 }' 00:41:43.156 12:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:43.156 12:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:44.092 12:05:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:41:44.092 [2024-06-10 12:05:15.996375] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:44.092 [2024-06-10 12:05:15.996433] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:41:44.092 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:41:44.351 [2024-06-10 12:05:16.236482] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:44.351 [2024-06-10 12:05:16.238617] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:44.351 [2024-06-10 12:05:16.238708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:44.351 12:05:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:44.351 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:44.631 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:44.631 "name": "Existed_Raid", 00:41:44.631 "uuid": "4e7dce59-9dc8-4e52-a625-0d76c470d1e9", 00:41:44.631 "strip_size_kb": 0, 00:41:44.631 "state": "configuring", 00:41:44.631 "raid_level": "raid1", 00:41:44.631 "superblock": true, 00:41:44.631 "num_base_bdevs": 2, 00:41:44.631 "num_base_bdevs_discovered": 1, 00:41:44.631 "num_base_bdevs_operational": 2, 00:41:44.631 "base_bdevs_list": [ 00:41:44.631 { 00:41:44.631 "name": "BaseBdev1", 00:41:44.631 "uuid": "af0fa781-1f62-4096-b3f0-8cbf0540ad07", 00:41:44.631 "is_configured": true, 00:41:44.631 "data_offset": 256, 00:41:44.631 "data_size": 7936 00:41:44.631 }, 00:41:44.631 { 00:41:44.631 "name": "BaseBdev2", 00:41:44.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:44.631 "is_configured": false, 00:41:44.631 "data_offset": 0, 00:41:44.631 "data_size": 0 00:41:44.631 } 00:41:44.631 ] 00:41:44.631 }' 00:41:44.631 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:44.631 12:05:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:45.564 12:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:41:45.564 [2024-06-10 12:05:17.585798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:45.564 [2024-06-10 12:05:17.586022] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:41:45.564 [2024-06-10 12:05:17.586036] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:45.564 [2024-06-10 12:05:17.586149] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:41:45.564 [2024-06-10 12:05:17.586242] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:41:45.564 [2024-06-10 12:05:17.586252] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:41:45.564 [2024-06-10 12:05:17.586315] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:45.564 BaseBdev2 00:41:45.565 12:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:41:45.565 12:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:41:45.565 12:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:41:45.565 12:05:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local i 00:41:45.565 12:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:41:45.565 12:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:41:45.565 12:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:41:46.132 12:05:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:41:46.132 [ 00:41:46.132 { 00:41:46.132 "name": "BaseBdev2", 00:41:46.132 "aliases": [ 00:41:46.132 "d5b3ee6a-0757-4929-9509-0bb1f63c37df" 00:41:46.132 ], 00:41:46.132 "product_name": "Malloc disk", 00:41:46.132 "block_size": 4128, 00:41:46.132 "num_blocks": 8192, 00:41:46.132 "uuid": "d5b3ee6a-0757-4929-9509-0bb1f63c37df", 00:41:46.132 "md_size": 32, 00:41:46.132 "md_interleave": true, 00:41:46.132 "dif_type": 0, 00:41:46.132 "assigned_rate_limits": { 00:41:46.132 "rw_ios_per_sec": 0, 00:41:46.132 "rw_mbytes_per_sec": 0, 00:41:46.132 "r_mbytes_per_sec": 0, 00:41:46.132 "w_mbytes_per_sec": 0 00:41:46.132 }, 00:41:46.132 "claimed": true, 00:41:46.132 "claim_type": "exclusive_write", 00:41:46.132 "zoned": false, 00:41:46.132 "supported_io_types": { 00:41:46.132 "read": true, 00:41:46.132 "write": true, 00:41:46.132 "unmap": true, 00:41:46.132 "write_zeroes": true, 00:41:46.132 "flush": true, 00:41:46.132 "reset": true, 00:41:46.132 "compare": false, 00:41:46.132 "compare_and_write": false, 00:41:46.132 "abort": true, 00:41:46.132 "nvme_admin": false, 00:41:46.132 "nvme_io": false 00:41:46.132 }, 00:41:46.132 "memory_domains": [ 00:41:46.132 { 00:41:46.132 "dma_device_id": "system", 00:41:46.132 "dma_device_type": 1 00:41:46.132 }, 00:41:46.132 { 00:41:46.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:46.133 "dma_device_type": 2 00:41:46.133 } 00:41:46.133 ], 00:41:46.133 "driver_specific": {} 00:41:46.133 } 00:41:46.133 ] 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # return 0 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:46.133 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:46.392 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:46.392 "name": "Existed_Raid", 00:41:46.392 "uuid": "4e7dce59-9dc8-4e52-a625-0d76c470d1e9", 00:41:46.392 "strip_size_kb": 0, 00:41:46.392 "state": "online", 00:41:46.392 "raid_level": "raid1", 00:41:46.392 "superblock": true, 00:41:46.392 "num_base_bdevs": 2, 00:41:46.392 "num_base_bdevs_discovered": 2, 00:41:46.392 "num_base_bdevs_operational": 2, 00:41:46.392 "base_bdevs_list": [ 00:41:46.392 { 00:41:46.392 "name": "BaseBdev1", 00:41:46.392 "uuid": "af0fa781-1f62-4096-b3f0-8cbf0540ad07", 00:41:46.392 "is_configured": true, 00:41:46.392 "data_offset": 256, 00:41:46.392 "data_size": 7936 00:41:46.392 }, 00:41:46.392 { 00:41:46.392 "name": "BaseBdev2", 00:41:46.392 "uuid": "d5b3ee6a-0757-4929-9509-0bb1f63c37df", 00:41:46.392 "is_configured": true, 00:41:46.392 "data_offset": 256, 00:41:46.392 "data_size": 7936 00:41:46.392 } 00:41:46.392 ] 00:41:46.392 }' 00:41:46.392 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:46.392 12:05:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:46.959 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:41:46.959 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:41:46.959 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:41:46.959 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:41:46.959 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:41:46.959 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:41:47.218 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:41:47.218 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:41:47.218 [2024-06-10 12:05:19.274545] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:47.477 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:41:47.477 "name": "Existed_Raid", 00:41:47.477 "aliases": [ 00:41:47.477 "4e7dce59-9dc8-4e52-a625-0d76c470d1e9" 00:41:47.477 ], 00:41:47.477 "product_name": "Raid Volume", 00:41:47.477 "block_size": 4128, 00:41:47.477 "num_blocks": 7936, 
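(annotation, not part of the run) The trace above walks Existed_Raid from "configuring" to "online" once both base bdevs exist. A minimal sketch of that RPC sequence, assembled only from calls visible in this log — the socket path, bdev names and the "32 4096 -m 32 -i" sizing are copied from the trace, not independently verified:

  # registering the raid first leaves it in "configuring" while its members are absent
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # malloc base bdevs with 32-byte interleaved metadata (reported above as block_size 4128, num_blocks 8192)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
  # once both members are claimed the raid reports "online" with num_base_bdevs_discovered: 2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'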
00:41:47.477 "uuid": "4e7dce59-9dc8-4e52-a625-0d76c470d1e9", 00:41:47.477 "md_size": 32, 00:41:47.477 "md_interleave": true, 00:41:47.477 "dif_type": 0, 00:41:47.477 "assigned_rate_limits": { 00:41:47.477 "rw_ios_per_sec": 0, 00:41:47.477 "rw_mbytes_per_sec": 0, 00:41:47.477 "r_mbytes_per_sec": 0, 00:41:47.477 "w_mbytes_per_sec": 0 00:41:47.477 }, 00:41:47.477 "claimed": false, 00:41:47.477 "zoned": false, 00:41:47.477 "supported_io_types": { 00:41:47.477 "read": true, 00:41:47.477 "write": true, 00:41:47.477 "unmap": false, 00:41:47.477 "write_zeroes": true, 00:41:47.477 "flush": false, 00:41:47.477 "reset": true, 00:41:47.477 "compare": false, 00:41:47.477 "compare_and_write": false, 00:41:47.477 "abort": false, 00:41:47.477 "nvme_admin": false, 00:41:47.477 "nvme_io": false 00:41:47.477 }, 00:41:47.477 "memory_domains": [ 00:41:47.477 { 00:41:47.477 "dma_device_id": "system", 00:41:47.477 "dma_device_type": 1 00:41:47.477 }, 00:41:47.477 { 00:41:47.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:47.477 "dma_device_type": 2 00:41:47.477 }, 00:41:47.477 { 00:41:47.477 "dma_device_id": "system", 00:41:47.477 "dma_device_type": 1 00:41:47.477 }, 00:41:47.477 { 00:41:47.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:47.477 "dma_device_type": 2 00:41:47.477 } 00:41:47.477 ], 00:41:47.477 "driver_specific": { 00:41:47.477 "raid": { 00:41:47.477 "uuid": "4e7dce59-9dc8-4e52-a625-0d76c470d1e9", 00:41:47.477 "strip_size_kb": 0, 00:41:47.477 "state": "online", 00:41:47.477 "raid_level": "raid1", 00:41:47.477 "superblock": true, 00:41:47.477 "num_base_bdevs": 2, 00:41:47.477 "num_base_bdevs_discovered": 2, 00:41:47.477 "num_base_bdevs_operational": 2, 00:41:47.477 "base_bdevs_list": [ 00:41:47.477 { 00:41:47.477 "name": "BaseBdev1", 00:41:47.477 "uuid": "af0fa781-1f62-4096-b3f0-8cbf0540ad07", 00:41:47.477 "is_configured": true, 00:41:47.477 "data_offset": 256, 00:41:47.477 "data_size": 7936 00:41:47.477 }, 00:41:47.477 { 00:41:47.477 "name": "BaseBdev2", 00:41:47.477 "uuid": "d5b3ee6a-0757-4929-9509-0bb1f63c37df", 00:41:47.477 "is_configured": true, 00:41:47.477 "data_offset": 256, 00:41:47.477 "data_size": 7936 00:41:47.477 } 00:41:47.477 ] 00:41:47.477 } 00:41:47.477 } 00:41:47.477 }' 00:41:47.477 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:47.477 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:41:47.477 BaseBdev2' 00:41:47.477 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:47.477 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:41:47.477 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:47.736 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:47.736 "name": "BaseBdev1", 00:41:47.736 "aliases": [ 00:41:47.736 "af0fa781-1f62-4096-b3f0-8cbf0540ad07" 00:41:47.736 ], 00:41:47.736 "product_name": "Malloc disk", 00:41:47.736 "block_size": 4128, 00:41:47.736 "num_blocks": 8192, 00:41:47.736 "uuid": "af0fa781-1f62-4096-b3f0-8cbf0540ad07", 00:41:47.736 "md_size": 32, 00:41:47.736 "md_interleave": true, 00:41:47.736 "dif_type": 0, 00:41:47.736 
"assigned_rate_limits": { 00:41:47.736 "rw_ios_per_sec": 0, 00:41:47.736 "rw_mbytes_per_sec": 0, 00:41:47.736 "r_mbytes_per_sec": 0, 00:41:47.736 "w_mbytes_per_sec": 0 00:41:47.736 }, 00:41:47.736 "claimed": true, 00:41:47.736 "claim_type": "exclusive_write", 00:41:47.736 "zoned": false, 00:41:47.736 "supported_io_types": { 00:41:47.736 "read": true, 00:41:47.736 "write": true, 00:41:47.736 "unmap": true, 00:41:47.736 "write_zeroes": true, 00:41:47.736 "flush": true, 00:41:47.736 "reset": true, 00:41:47.736 "compare": false, 00:41:47.736 "compare_and_write": false, 00:41:47.736 "abort": true, 00:41:47.736 "nvme_admin": false, 00:41:47.736 "nvme_io": false 00:41:47.736 }, 00:41:47.736 "memory_domains": [ 00:41:47.736 { 00:41:47.736 "dma_device_id": "system", 00:41:47.736 "dma_device_type": 1 00:41:47.736 }, 00:41:47.736 { 00:41:47.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:47.736 "dma_device_type": 2 00:41:47.736 } 00:41:47.736 ], 00:41:47.736 "driver_specific": {} 00:41:47.736 }' 00:41:47.736 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:47.736 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:47.736 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:47.736 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:47.736 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:47.736 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:47.736 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:47.995 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:47.995 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:47.995 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:47.995 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:47.995 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:47.995 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:47.995 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:41:47.995 12:05:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:48.253 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:48.253 "name": "BaseBdev2", 00:41:48.254 "aliases": [ 00:41:48.254 "d5b3ee6a-0757-4929-9509-0bb1f63c37df" 00:41:48.254 ], 00:41:48.254 "product_name": "Malloc disk", 00:41:48.254 "block_size": 4128, 00:41:48.254 "num_blocks": 8192, 00:41:48.254 "uuid": "d5b3ee6a-0757-4929-9509-0bb1f63c37df", 00:41:48.254 "md_size": 32, 00:41:48.254 "md_interleave": true, 00:41:48.254 "dif_type": 0, 00:41:48.254 "assigned_rate_limits": { 00:41:48.254 "rw_ios_per_sec": 0, 00:41:48.254 "rw_mbytes_per_sec": 0, 00:41:48.254 "r_mbytes_per_sec": 
0, 00:41:48.254 "w_mbytes_per_sec": 0 00:41:48.254 }, 00:41:48.254 "claimed": true, 00:41:48.254 "claim_type": "exclusive_write", 00:41:48.254 "zoned": false, 00:41:48.254 "supported_io_types": { 00:41:48.254 "read": true, 00:41:48.254 "write": true, 00:41:48.254 "unmap": true, 00:41:48.254 "write_zeroes": true, 00:41:48.254 "flush": true, 00:41:48.254 "reset": true, 00:41:48.254 "compare": false, 00:41:48.254 "compare_and_write": false, 00:41:48.254 "abort": true, 00:41:48.254 "nvme_admin": false, 00:41:48.254 "nvme_io": false 00:41:48.254 }, 00:41:48.254 "memory_domains": [ 00:41:48.254 { 00:41:48.254 "dma_device_id": "system", 00:41:48.254 "dma_device_type": 1 00:41:48.254 }, 00:41:48.254 { 00:41:48.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:48.254 "dma_device_type": 2 00:41:48.254 } 00:41:48.254 ], 00:41:48.254 "driver_specific": {} 00:41:48.254 }' 00:41:48.254 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:48.512 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:48.771 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:48.771 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:48.771 12:05:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:41:49.044 [2024-06-10 12:05:20.898645] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:49.044 
12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:49.044 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:49.303 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:49.303 "name": "Existed_Raid", 00:41:49.303 "uuid": "4e7dce59-9dc8-4e52-a625-0d76c470d1e9", 00:41:49.303 "strip_size_kb": 0, 00:41:49.303 "state": "online", 00:41:49.303 "raid_level": "raid1", 00:41:49.303 "superblock": true, 00:41:49.303 "num_base_bdevs": 2, 00:41:49.303 "num_base_bdevs_discovered": 1, 00:41:49.303 "num_base_bdevs_operational": 1, 00:41:49.303 "base_bdevs_list": [ 00:41:49.303 { 00:41:49.303 "name": null, 00:41:49.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:49.303 "is_configured": false, 00:41:49.303 "data_offset": 256, 00:41:49.303 "data_size": 7936 00:41:49.303 }, 00:41:49.303 { 00:41:49.303 "name": "BaseBdev2", 00:41:49.303 "uuid": "d5b3ee6a-0757-4929-9509-0bb1f63c37df", 00:41:49.303 "is_configured": true, 00:41:49.303 "data_offset": 256, 00:41:49.303 "data_size": 7936 00:41:49.303 } 00:41:49.303 ] 00:41:49.303 }' 00:41:49.303 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:49.303 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:49.870 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:41:49.870 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:41:49.870 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:49.870 12:05:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:41:50.129 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:41:50.129 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:50.129 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # 
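(annotation, not part of the run) The "num_base_bdevs_discovered": 1 with "state": "online" output above is the degraded-mode check: deleting one malloc bdev out from under a raid1 with redundancy leaves the array online, and only removing the second member (next in the trace) deconfigures it. A sketch of that check using the same RPC calls, with names taken from the log:

  # drop one mirror; raid1 has redundancy, so Existed_Raid should stay "online" with a single base bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # dropping the remaining member takes the raid from online to offline, as the trace below shows
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2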
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:41:50.387 [2024-06-10 12:05:22.353431] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:50.387 [2024-06-10 12:05:22.353587] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:50.646 [2024-06-10 12:05:22.475048] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:50.646 [2024-06-10 12:05:22.475136] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:50.646 [2024-06-10 12:05:22.475151] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:41:50.646 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:41:50.646 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:41:50.646 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:50.646 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 164426 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 164426 ']' 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 164426 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 164426 00:41:50.905 killing process with pid 164426 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 164426' 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # kill 164426 00:41:50.905 12:05:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # wait 164426 00:41:50.905 [2024-06-10 12:05:22.823401] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:50.905 [2024-06-10 12:05:22.823535] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:52.283 ************************************ 00:41:52.283 END TEST raid_state_function_test_sb_md_interleaved 00:41:52.283 
************************************ 00:41:52.283 12:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:41:52.283 00:41:52.283 real 0m12.664s 00:41:52.283 user 0m21.693s 00:41:52.283 sys 0m1.830s 00:41:52.283 12:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:52.283 12:05:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:52.283 12:05:24 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:41:52.283 12:05:24 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:41:52.283 12:05:24 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:52.283 12:05:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:52.283 ************************************ 00:41:52.283 START TEST raid_superblock_test_md_interleaved 00:41:52.283 ************************************ 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=164813 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 164813 /var/tmp/spdk-raid.sock 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@830 -- # '[' -z 164813 ']' 
00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:52.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:52.283 12:05:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:52.542 [2024-06-10 12:05:24.374123] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:41:52.542 [2024-06-10 12:05:24.374469] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164813 ] 00:41:52.543 [2024-06-10 12:05:24.558469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:52.801 [2024-06-10 12:05:24.780356] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:53.059 [2024-06-10 12:05:25.000861] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:53.626 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:41:53.884 malloc1 00:41:53.884 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:53.884 [2024-06-10 12:05:25.901822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:53.884 [2024-06-10 12:05:25.901939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:53.885 [2024-06-10 
12:05:25.901985] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:41:53.885 [2024-06-10 12:05:25.902015] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:53.885 [2024-06-10 12:05:25.904326] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:53.885 [2024-06-10 12:05:25.904379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:53.885 pt1 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:53.885 12:05:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:41:54.450 malloc2 00:41:54.450 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:41:54.450 [2024-06-10 12:05:26.493644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:54.450 [2024-06-10 12:05:26.493766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:54.450 [2024-06-10 12:05:26.493821] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:41:54.450 [2024-06-10 12:05:26.493843] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:54.450 [2024-06-10 12:05:26.495986] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:54.450 [2024-06-10 12:05:26.496037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:54.450 pt2 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:41:54.708 [2024-06-10 12:05:26.697776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:54.708 [2024-06-10 12:05:26.700017] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:54.708 [2024-06-10 12:05:26.700271] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 
00:41:54.708 [2024-06-10 12:05:26.700293] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:54.708 [2024-06-10 12:05:26.700405] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:41:54.708 [2024-06-10 12:05:26.700478] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:41:54.708 [2024-06-10 12:05:26.700491] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:41:54.708 [2024-06-10 12:05:26.700561] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:54.708 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:54.966 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:54.966 "name": "raid_bdev1", 00:41:54.966 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:41:54.966 "strip_size_kb": 0, 00:41:54.966 "state": "online", 00:41:54.966 "raid_level": "raid1", 00:41:54.966 "superblock": true, 00:41:54.966 "num_base_bdevs": 2, 00:41:54.966 "num_base_bdevs_discovered": 2, 00:41:54.966 "num_base_bdevs_operational": 2, 00:41:54.966 "base_bdevs_list": [ 00:41:54.966 { 00:41:54.966 "name": "pt1", 00:41:54.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:54.966 "is_configured": true, 00:41:54.966 "data_offset": 256, 00:41:54.966 "data_size": 7936 00:41:54.966 }, 00:41:54.966 { 00:41:54.966 "name": "pt2", 00:41:54.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:54.966 "is_configured": true, 00:41:54.966 "data_offset": 256, 00:41:54.966 "data_size": 7936 00:41:54.966 } 00:41:54.966 ] 00:41:54.966 }' 00:41:54.966 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:54.966 12:05:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:55.532 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:41:55.532 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:41:55.532 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:41:55.532 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:41:55.532 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:41:55.532 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:41:55.532 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:55.532 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:41:55.790 [2024-06-10 12:05:27.730155] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:55.790 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:41:55.790 "name": "raid_bdev1", 00:41:55.790 "aliases": [ 00:41:55.790 "2c34409c-766d-419c-9d7e-d1121e48496f" 00:41:55.790 ], 00:41:55.790 "product_name": "Raid Volume", 00:41:55.790 "block_size": 4128, 00:41:55.790 "num_blocks": 7936, 00:41:55.790 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:41:55.790 "md_size": 32, 00:41:55.790 "md_interleave": true, 00:41:55.790 "dif_type": 0, 00:41:55.790 "assigned_rate_limits": { 00:41:55.790 "rw_ios_per_sec": 0, 00:41:55.790 "rw_mbytes_per_sec": 0, 00:41:55.790 "r_mbytes_per_sec": 0, 00:41:55.790 "w_mbytes_per_sec": 0 00:41:55.790 }, 00:41:55.790 "claimed": false, 00:41:55.790 "zoned": false, 00:41:55.790 "supported_io_types": { 00:41:55.790 "read": true, 00:41:55.790 "write": true, 00:41:55.790 "unmap": false, 00:41:55.790 "write_zeroes": true, 00:41:55.790 "flush": false, 00:41:55.790 "reset": true, 00:41:55.790 "compare": false, 00:41:55.790 "compare_and_write": false, 00:41:55.790 "abort": false, 00:41:55.790 "nvme_admin": false, 00:41:55.790 "nvme_io": false 00:41:55.790 }, 00:41:55.790 "memory_domains": [ 00:41:55.790 { 00:41:55.790 "dma_device_id": "system", 00:41:55.790 "dma_device_type": 1 00:41:55.790 }, 00:41:55.790 { 00:41:55.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:55.790 "dma_device_type": 2 00:41:55.790 }, 00:41:55.790 { 00:41:55.790 "dma_device_id": "system", 00:41:55.790 "dma_device_type": 1 00:41:55.791 }, 00:41:55.791 { 00:41:55.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:55.791 "dma_device_type": 2 00:41:55.791 } 00:41:55.791 ], 00:41:55.791 "driver_specific": { 00:41:55.791 "raid": { 00:41:55.791 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:41:55.791 "strip_size_kb": 0, 00:41:55.791 "state": "online", 00:41:55.791 "raid_level": "raid1", 00:41:55.791 "superblock": true, 00:41:55.791 "num_base_bdevs": 2, 00:41:55.791 "num_base_bdevs_discovered": 2, 00:41:55.791 "num_base_bdevs_operational": 2, 00:41:55.791 "base_bdevs_list": [ 00:41:55.791 { 00:41:55.791 "name": "pt1", 00:41:55.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:55.791 "is_configured": true, 00:41:55.791 "data_offset": 256, 00:41:55.791 "data_size": 7936 00:41:55.791 }, 00:41:55.791 { 00:41:55.791 "name": "pt2", 00:41:55.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:55.791 "is_configured": true, 00:41:55.791 "data_offset": 256, 00:41:55.791 "data_size": 7936 
00:41:55.791 } 00:41:55.791 ] 00:41:55.791 } 00:41:55.791 } 00:41:55.791 }' 00:41:55.791 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:55.791 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:41:55.791 pt2' 00:41:55.791 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:55.791 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:55.791 12:05:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:41:56.049 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:56.049 "name": "pt1", 00:41:56.049 "aliases": [ 00:41:56.049 "00000000-0000-0000-0000-000000000001" 00:41:56.049 ], 00:41:56.049 "product_name": "passthru", 00:41:56.049 "block_size": 4128, 00:41:56.049 "num_blocks": 8192, 00:41:56.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:56.049 "md_size": 32, 00:41:56.050 "md_interleave": true, 00:41:56.050 "dif_type": 0, 00:41:56.050 "assigned_rate_limits": { 00:41:56.050 "rw_ios_per_sec": 0, 00:41:56.050 "rw_mbytes_per_sec": 0, 00:41:56.050 "r_mbytes_per_sec": 0, 00:41:56.050 "w_mbytes_per_sec": 0 00:41:56.050 }, 00:41:56.050 "claimed": true, 00:41:56.050 "claim_type": "exclusive_write", 00:41:56.050 "zoned": false, 00:41:56.050 "supported_io_types": { 00:41:56.050 "read": true, 00:41:56.050 "write": true, 00:41:56.050 "unmap": true, 00:41:56.050 "write_zeroes": true, 00:41:56.050 "flush": true, 00:41:56.050 "reset": true, 00:41:56.050 "compare": false, 00:41:56.050 "compare_and_write": false, 00:41:56.050 "abort": true, 00:41:56.050 "nvme_admin": false, 00:41:56.050 "nvme_io": false 00:41:56.050 }, 00:41:56.050 "memory_domains": [ 00:41:56.050 { 00:41:56.050 "dma_device_id": "system", 00:41:56.050 "dma_device_type": 1 00:41:56.050 }, 00:41:56.050 { 00:41:56.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:56.050 "dma_device_type": 2 00:41:56.050 } 00:41:56.050 ], 00:41:56.050 "driver_specific": { 00:41:56.050 "passthru": { 00:41:56.050 "name": "pt1", 00:41:56.050 "base_bdev_name": "malloc1" 00:41:56.050 } 00:41:56.050 } 00:41:56.050 }' 00:41:56.050 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:56.050 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:56.308 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:56.308 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:56.308 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:56.308 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:56.308 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:56.308 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:56.309 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:56.309 12:05:28 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:56.309 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:56.567 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:56.567 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:56.567 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:41:56.567 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:56.567 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:56.567 "name": "pt2", 00:41:56.567 "aliases": [ 00:41:56.568 "00000000-0000-0000-0000-000000000002" 00:41:56.568 ], 00:41:56.568 "product_name": "passthru", 00:41:56.568 "block_size": 4128, 00:41:56.568 "num_blocks": 8192, 00:41:56.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:56.568 "md_size": 32, 00:41:56.568 "md_interleave": true, 00:41:56.568 "dif_type": 0, 00:41:56.568 "assigned_rate_limits": { 00:41:56.568 "rw_ios_per_sec": 0, 00:41:56.568 "rw_mbytes_per_sec": 0, 00:41:56.568 "r_mbytes_per_sec": 0, 00:41:56.568 "w_mbytes_per_sec": 0 00:41:56.568 }, 00:41:56.568 "claimed": true, 00:41:56.568 "claim_type": "exclusive_write", 00:41:56.568 "zoned": false, 00:41:56.568 "supported_io_types": { 00:41:56.568 "read": true, 00:41:56.568 "write": true, 00:41:56.568 "unmap": true, 00:41:56.568 "write_zeroes": true, 00:41:56.568 "flush": true, 00:41:56.568 "reset": true, 00:41:56.568 "compare": false, 00:41:56.568 "compare_and_write": false, 00:41:56.568 "abort": true, 00:41:56.568 "nvme_admin": false, 00:41:56.568 "nvme_io": false 00:41:56.568 }, 00:41:56.568 "memory_domains": [ 00:41:56.568 { 00:41:56.568 "dma_device_id": "system", 00:41:56.568 "dma_device_type": 1 00:41:56.568 }, 00:41:56.568 { 00:41:56.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:56.568 "dma_device_type": 2 00:41:56.568 } 00:41:56.568 ], 00:41:56.568 "driver_specific": { 00:41:56.568 "passthru": { 00:41:56.568 "name": "pt2", 00:41:56.568 "base_bdev_name": "malloc2" 00:41:56.568 } 00:41:56.568 } 00:41:56.568 }' 00:41:56.568 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:56.827 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:56.827 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:56.827 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:56.827 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:56.827 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:56.827 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:56.827 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:57.086 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:57.086 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:57.086 12:05:28 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:57.086 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:57.086 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:57.086 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:41:57.344 [2024-06-10 12:05:29.274496] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:57.344 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2c34409c-766d-419c-9d7e-d1121e48496f 00:41:57.344 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 2c34409c-766d-419c-9d7e-d1121e48496f ']' 00:41:57.344 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:57.603 [2024-06-10 12:05:29.486273] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:57.603 [2024-06-10 12:05:29.486469] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:57.603 [2024-06-10 12:05:29.486699] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:57.603 [2024-06-10 12:05:29.486848] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:57.603 [2024-06-10 12:05:29.486934] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:41:57.603 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:57.603 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:41:57.861 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:41:57.861 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:41:57.861 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:41:57.861 12:05:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:41:58.120 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:41:58.120 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:41:58.378 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:41:58.378 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@649 -- # local es=0 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:58.637 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:41:58.638 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:41:58.638 [2024-06-10 12:05:30.694488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:41:58.896 [2024-06-10 12:05:30.696849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:41:58.896 [2024-06-10 12:05:30.697073] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:41:58.896 [2024-06-10 12:05:30.697273] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:41:58.896 [2024-06-10 12:05:30.697337] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:58.896 [2024-06-10 12:05:30.697505] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:41:58.896 request: 00:41:58.896 { 00:41:58.896 "name": "raid_bdev1", 00:41:58.896 "raid_level": "raid1", 00:41:58.896 "base_bdevs": [ 00:41:58.896 "malloc1", 00:41:58.896 "malloc2" 00:41:58.896 ], 00:41:58.896 "superblock": false, 00:41:58.896 "method": "bdev_raid_create", 00:41:58.896 "req_id": 1 00:41:58.896 } 00:41:58.896 Got JSON-RPC error response 00:41:58.896 response: 00:41:58.896 { 00:41:58.896 "code": -17, 00:41:58.896 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:41:58.896 } 00:41:58.896 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # es=1 00:41:58.896 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:58.897 12:05:30 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:58.897 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:58.897 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:58.897 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:41:58.897 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:41:58.897 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:41:58.897 12:05:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:59.164 [2024-06-10 12:05:31.098546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:59.164 [2024-06-10 12:05:31.098852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:59.165 [2024-06-10 12:05:31.098923] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:59.165 [2024-06-10 12:05:31.099180] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:59.165 [2024-06-10 12:05:31.101382] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:59.165 [2024-06-10 12:05:31.101562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:59.165 [2024-06-10 12:05:31.101715] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:41:59.165 [2024-06-10 12:05:31.101850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:59.165 pt1 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:59.165 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:59.421 12:05:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:59.421 "name": "raid_bdev1", 00:41:59.421 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:41:59.421 "strip_size_kb": 0, 00:41:59.421 "state": "configuring", 00:41:59.421 "raid_level": "raid1", 00:41:59.421 "superblock": true, 00:41:59.421 "num_base_bdevs": 2, 00:41:59.421 "num_base_bdevs_discovered": 1, 00:41:59.421 "num_base_bdevs_operational": 2, 00:41:59.421 "base_bdevs_list": [ 00:41:59.421 { 00:41:59.421 "name": "pt1", 00:41:59.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:59.421 "is_configured": true, 00:41:59.421 "data_offset": 256, 00:41:59.421 "data_size": 7936 00:41:59.421 }, 00:41:59.421 { 00:41:59.421 "name": null, 00:41:59.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:59.421 "is_configured": false, 00:41:59.421 "data_offset": 256, 00:41:59.421 "data_size": 7936 00:41:59.421 } 00:41:59.421 ] 00:41:59.421 }' 00:41:59.421 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:59.421 12:05:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:00.357 [2024-06-10 12:05:32.330878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:00.357 [2024-06-10 12:05:32.331201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:00.357 [2024-06-10 12:05:32.331345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:42:00.357 [2024-06-10 12:05:32.331446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:00.357 [2024-06-10 12:05:32.331653] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:00.357 [2024-06-10 12:05:32.331808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:00.357 [2024-06-10 12:05:32.331964] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:42:00.357 [2024-06-10 12:05:32.332016] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:00.357 [2024-06-10 12:05:32.332194] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:42:00.357 [2024-06-10 12:05:32.332280] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:42:00.357 [2024-06-10 12:05:32.332390] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:42:00.357 [2024-06-10 12:05:32.332599] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:42:00.357 [2024-06-10 12:05:32.332635] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:42:00.357 [2024-06-10 12:05:32.332714] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:00.357 pt2 00:42:00.357 
12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:00.357 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:00.616 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:00.616 "name": "raid_bdev1", 00:42:00.616 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:42:00.616 "strip_size_kb": 0, 00:42:00.616 "state": "online", 00:42:00.616 "raid_level": "raid1", 00:42:00.616 "superblock": true, 00:42:00.616 "num_base_bdevs": 2, 00:42:00.616 "num_base_bdevs_discovered": 2, 00:42:00.616 "num_base_bdevs_operational": 2, 00:42:00.616 "base_bdevs_list": [ 00:42:00.616 { 00:42:00.616 "name": "pt1", 00:42:00.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:00.616 "is_configured": true, 00:42:00.616 "data_offset": 256, 00:42:00.616 "data_size": 7936 00:42:00.616 }, 00:42:00.616 { 00:42:00.616 "name": "pt2", 00:42:00.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:00.616 "is_configured": true, 00:42:00.616 "data_offset": 256, 00:42:00.616 "data_size": 7936 00:42:00.616 } 00:42:00.616 ] 00:42:00.616 }' 00:42:00.616 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:00.616 12:05:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:42:01.551 12:05:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:42:01.551 [2024-06-10 12:05:33.447378] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:42:01.551 "name": "raid_bdev1", 00:42:01.551 "aliases": [ 00:42:01.551 "2c34409c-766d-419c-9d7e-d1121e48496f" 00:42:01.551 ], 00:42:01.551 "product_name": "Raid Volume", 00:42:01.551 "block_size": 4128, 00:42:01.551 "num_blocks": 7936, 00:42:01.551 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:42:01.551 "md_size": 32, 00:42:01.551 "md_interleave": true, 00:42:01.551 "dif_type": 0, 00:42:01.551 "assigned_rate_limits": { 00:42:01.551 "rw_ios_per_sec": 0, 00:42:01.551 "rw_mbytes_per_sec": 0, 00:42:01.551 "r_mbytes_per_sec": 0, 00:42:01.551 "w_mbytes_per_sec": 0 00:42:01.551 }, 00:42:01.551 "claimed": false, 00:42:01.551 "zoned": false, 00:42:01.551 "supported_io_types": { 00:42:01.551 "read": true, 00:42:01.551 "write": true, 00:42:01.551 "unmap": false, 00:42:01.551 "write_zeroes": true, 00:42:01.551 "flush": false, 00:42:01.551 "reset": true, 00:42:01.551 "compare": false, 00:42:01.551 "compare_and_write": false, 00:42:01.551 "abort": false, 00:42:01.551 "nvme_admin": false, 00:42:01.551 "nvme_io": false 00:42:01.551 }, 00:42:01.551 "memory_domains": [ 00:42:01.551 { 00:42:01.551 "dma_device_id": "system", 00:42:01.551 "dma_device_type": 1 00:42:01.551 }, 00:42:01.551 { 00:42:01.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:01.551 "dma_device_type": 2 00:42:01.551 }, 00:42:01.551 { 00:42:01.551 "dma_device_id": "system", 00:42:01.551 "dma_device_type": 1 00:42:01.551 }, 00:42:01.551 { 00:42:01.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:01.551 "dma_device_type": 2 00:42:01.551 } 00:42:01.551 ], 00:42:01.551 "driver_specific": { 00:42:01.551 "raid": { 00:42:01.551 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:42:01.551 "strip_size_kb": 0, 00:42:01.551 "state": "online", 00:42:01.551 "raid_level": "raid1", 00:42:01.551 "superblock": true, 00:42:01.551 "num_base_bdevs": 2, 00:42:01.551 "num_base_bdevs_discovered": 2, 00:42:01.551 "num_base_bdevs_operational": 2, 00:42:01.551 "base_bdevs_list": [ 00:42:01.551 { 00:42:01.551 "name": "pt1", 00:42:01.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:01.551 "is_configured": true, 00:42:01.551 "data_offset": 256, 00:42:01.551 "data_size": 7936 00:42:01.551 }, 00:42:01.551 { 00:42:01.551 "name": "pt2", 00:42:01.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:01.551 "is_configured": true, 00:42:01.551 "data_offset": 256, 00:42:01.551 "data_size": 7936 00:42:01.551 } 00:42:01.551 ] 00:42:01.551 } 00:42:01.551 } 00:42:01.551 }' 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:42:01.551 pt2' 00:42:01.551 12:05:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:42:01.551 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:42:01.810 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:42:01.810 "name": "pt1", 00:42:01.810 "aliases": [ 00:42:01.810 "00000000-0000-0000-0000-000000000001" 00:42:01.810 ], 00:42:01.810 "product_name": "passthru", 00:42:01.810 "block_size": 4128, 00:42:01.810 "num_blocks": 8192, 00:42:01.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:01.810 "md_size": 32, 00:42:01.810 "md_interleave": true, 00:42:01.810 "dif_type": 0, 00:42:01.810 "assigned_rate_limits": { 00:42:01.810 "rw_ios_per_sec": 0, 00:42:01.810 "rw_mbytes_per_sec": 0, 00:42:01.810 "r_mbytes_per_sec": 0, 00:42:01.810 "w_mbytes_per_sec": 0 00:42:01.810 }, 00:42:01.810 "claimed": true, 00:42:01.810 "claim_type": "exclusive_write", 00:42:01.810 "zoned": false, 00:42:01.810 "supported_io_types": { 00:42:01.810 "read": true, 00:42:01.810 "write": true, 00:42:01.810 "unmap": true, 00:42:01.810 "write_zeroes": true, 00:42:01.810 "flush": true, 00:42:01.810 "reset": true, 00:42:01.810 "compare": false, 00:42:01.810 "compare_and_write": false, 00:42:01.810 "abort": true, 00:42:01.810 "nvme_admin": false, 00:42:01.810 "nvme_io": false 00:42:01.810 }, 00:42:01.810 "memory_domains": [ 00:42:01.810 { 00:42:01.810 "dma_device_id": "system", 00:42:01.810 "dma_device_type": 1 00:42:01.810 }, 00:42:01.810 { 00:42:01.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:01.810 "dma_device_type": 2 00:42:01.810 } 00:42:01.810 ], 00:42:01.810 "driver_specific": { 00:42:01.810 "passthru": { 00:42:01.810 "name": "pt1", 00:42:01.810 "base_bdev_name": "malloc1" 00:42:01.810 } 00:42:01.810 } 00:42:01.810 }' 00:42:01.810 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:42:01.810 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:42:01.810 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:42:01.810 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:42:02.069 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:42:02.069 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:42:02.069 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:42:02.069 12:05:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:42:02.069 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:42:02.069 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:42:02.069 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:42:02.069 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:42:02.069 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:42:02.069 12:05:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:42:02.069 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:42:02.328 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:42:02.328 "name": "pt2", 00:42:02.328 "aliases": [ 00:42:02.328 "00000000-0000-0000-0000-000000000002" 00:42:02.328 ], 00:42:02.328 "product_name": "passthru", 00:42:02.328 "block_size": 4128, 00:42:02.328 "num_blocks": 8192, 00:42:02.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:02.328 "md_size": 32, 00:42:02.328 "md_interleave": true, 00:42:02.328 "dif_type": 0, 00:42:02.328 "assigned_rate_limits": { 00:42:02.328 "rw_ios_per_sec": 0, 00:42:02.328 "rw_mbytes_per_sec": 0, 00:42:02.328 "r_mbytes_per_sec": 0, 00:42:02.328 "w_mbytes_per_sec": 0 00:42:02.328 }, 00:42:02.328 "claimed": true, 00:42:02.328 "claim_type": "exclusive_write", 00:42:02.328 "zoned": false, 00:42:02.328 "supported_io_types": { 00:42:02.328 "read": true, 00:42:02.328 "write": true, 00:42:02.328 "unmap": true, 00:42:02.328 "write_zeroes": true, 00:42:02.328 "flush": true, 00:42:02.328 "reset": true, 00:42:02.328 "compare": false, 00:42:02.328 "compare_and_write": false, 00:42:02.328 "abort": true, 00:42:02.328 "nvme_admin": false, 00:42:02.328 "nvme_io": false 00:42:02.328 }, 00:42:02.328 "memory_domains": [ 00:42:02.328 { 00:42:02.328 "dma_device_id": "system", 00:42:02.328 "dma_device_type": 1 00:42:02.328 }, 00:42:02.328 { 00:42:02.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:02.328 "dma_device_type": 2 00:42:02.328 } 00:42:02.328 ], 00:42:02.328 "driver_specific": { 00:42:02.328 "passthru": { 00:42:02.328 "name": "pt2", 00:42:02.328 "base_bdev_name": "malloc2" 00:42:02.328 } 00:42:02.328 } 00:42:02.328 }' 00:42:02.328 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:42:02.587 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:42:02.587 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:42:02.587 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:42:02.587 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:42:02.587 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:42:02.587 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:42:02.587 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:42:02.845 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:42:02.846 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:42:02.846 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:42:02.846 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:42:02.846 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:02.846 12:05:34 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:42:03.104 [2024-06-10 12:05:35.023672] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:03.104 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 2c34409c-766d-419c-9d7e-d1121e48496f '!=' 2c34409c-766d-419c-9d7e-d1121e48496f ']' 00:42:03.104 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:42:03.104 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:42:03.104 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:42:03.104 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:42:03.392 [2024-06-10 12:05:35.211609] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:03.392 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:03.696 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:03.696 "name": "raid_bdev1", 00:42:03.696 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:42:03.696 "strip_size_kb": 0, 00:42:03.696 "state": "online", 00:42:03.696 "raid_level": "raid1", 00:42:03.696 "superblock": true, 00:42:03.696 "num_base_bdevs": 2, 00:42:03.696 "num_base_bdevs_discovered": 1, 00:42:03.696 "num_base_bdevs_operational": 1, 00:42:03.696 "base_bdevs_list": [ 00:42:03.696 { 00:42:03.696 "name": null, 00:42:03.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:03.696 "is_configured": false, 00:42:03.696 "data_offset": 256, 00:42:03.696 "data_size": 7936 00:42:03.696 }, 00:42:03.696 { 00:42:03.696 "name": "pt2", 00:42:03.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:03.696 "is_configured": true, 00:42:03.696 "data_offset": 256, 00:42:03.696 "data_size": 7936 00:42:03.696 } 00:42:03.696 ] 00:42:03.696 }' 
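The check above follows the pattern the suite uses throughout: dump all raid bdevs over the test RPC socket and filter out the raid_bdev1 entry with jq. A minimal standalone sketch of the same verification, assuming the SPDK app is still listening on /var/tmp/spdk-raid.sock (the rpc/sock variables below are introduced only for brevity), could look like this:

  # Sketch only: confirm raid_bdev1 stays online with a single discovered base
  # bdev after pt1 has been deleted; paths and jq filter are taken from the
  # trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')

  [[ $(jq -r '.state' <<< "$info") == "online" ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == "1" ]]
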
00:42:03.696 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:03.696 12:05:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:04.263 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:04.263 [2024-06-10 12:05:36.295760] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:04.263 [2024-06-10 12:05:36.295947] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:04.263 [2024-06-10 12:05:36.296086] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:04.263 [2024-06-10 12:05:36.296232] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:04.263 [2024-06-10 12:05:36.296318] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:42:04.263 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:42:04.263 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:04.522 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:42:04.522 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:42:04.522 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:42:04.522 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:42:04.522 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:42:04.782 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:42:04.782 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:42:04.782 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:42:04.782 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:42:04.782 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:42:04.782 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:05.041 [2024-06-10 12:05:36.963158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:05.041 [2024-06-10 12:05:36.964032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:05.041 [2024-06-10 12:05:36.964448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:42:05.041 [2024-06-10 12:05:36.964757] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:05.041 [2024-06-10 12:05:36.967783] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:05.041 [2024-06-10 12:05:36.968104] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:42:05.041 [2024-06-10 12:05:36.968456] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:42:05.041 [2024-06-10 12:05:36.968647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:05.041 [2024-06-10 12:05:36.968910] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:42:05.041 [2024-06-10 12:05:36.969069] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:42:05.041 [2024-06-10 12:05:36.969207] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:42:05.041 [2024-06-10 12:05:36.969514] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:42:05.041 [2024-06-10 12:05:36.969639] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:42:05.041 pt2 00:42:05.041 [2024-06-10 12:05:36.969805] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:05.041 12:05:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:05.300 12:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:05.300 "name": "raid_bdev1", 00:42:05.300 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:42:05.300 "strip_size_kb": 0, 00:42:05.300 "state": "online", 00:42:05.300 "raid_level": "raid1", 00:42:05.300 "superblock": true, 00:42:05.300 "num_base_bdevs": 2, 00:42:05.300 "num_base_bdevs_discovered": 1, 00:42:05.300 "num_base_bdevs_operational": 1, 00:42:05.300 "base_bdevs_list": [ 00:42:05.300 { 00:42:05.300 "name": null, 00:42:05.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:05.300 "is_configured": false, 00:42:05.300 "data_offset": 256, 00:42:05.300 "data_size": 7936 00:42:05.300 }, 00:42:05.300 { 00:42:05.300 "name": "pt2", 00:42:05.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:05.300 "is_configured": true, 00:42:05.300 "data_offset": 
256, 00:42:05.300 "data_size": 7936 00:42:05.300 } 00:42:05.300 ] 00:42:05.300 }' 00:42:05.300 12:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:05.300 12:05:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:05.867 12:05:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:06.125 [2024-06-10 12:05:38.032800] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:06.125 [2024-06-10 12:05:38.033038] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:06.125 [2024-06-10 12:05:38.033257] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:06.125 [2024-06-10 12:05:38.033349] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:06.125 [2024-06-10 12:05:38.033552] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:42:06.125 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:06.125 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:42:06.384 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:42:06.384 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:42:06.384 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:42:06.384 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:06.384 [2024-06-10 12:05:38.424911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:06.384 [2024-06-10 12:05:38.425594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:06.384 [2024-06-10 12:05:38.425916] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:42:06.384 [2024-06-10 12:05:38.426164] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:06.384 [2024-06-10 12:05:38.428601] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:06.384 [2024-06-10 12:05:38.428877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:06.384 [2024-06-10 12:05:38.429192] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:42:06.384 [2024-06-10 12:05:38.429374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:06.384 [2024-06-10 12:05:38.429621] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:42:06.384 [2024-06-10 12:05:38.429726] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:06.384 [2024-06-10 12:05:38.429785] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:42:06.384 [2024-06-10 12:05:38.429938] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:06.384 [2024-06-10 12:05:38.430066] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:42:06.384 [2024-06-10 12:05:38.430257] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:42:06.384 [2024-06-10 12:05:38.430359] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:06.384 pt1 00:42:06.384 [2024-06-10 12:05:38.430460] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:42:06.384 [2024-06-10 12:05:38.430470] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:42:06.384 [2024-06-10 12:05:38.430534] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:06.643 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:06.902 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:06.902 "name": "raid_bdev1", 00:42:06.902 "uuid": "2c34409c-766d-419c-9d7e-d1121e48496f", 00:42:06.902 "strip_size_kb": 0, 00:42:06.902 "state": "online", 00:42:06.902 "raid_level": "raid1", 00:42:06.902 "superblock": true, 00:42:06.902 "num_base_bdevs": 2, 00:42:06.902 "num_base_bdevs_discovered": 1, 00:42:06.902 "num_base_bdevs_operational": 1, 00:42:06.902 "base_bdevs_list": [ 00:42:06.902 { 00:42:06.902 "name": null, 00:42:06.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:06.902 "is_configured": false, 00:42:06.902 "data_offset": 256, 00:42:06.902 "data_size": 7936 00:42:06.902 }, 00:42:06.902 { 00:42:06.902 "name": "pt2", 00:42:06.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:06.902 "is_configured": true, 00:42:06.902 "data_offset": 256, 00:42:06.902 "data_size": 7936 00:42:06.902 } 00:42:06.902 ] 00:42:06.902 }' 
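Once pt1 and pt2 have been recreated, raid_bdev1 is reassembled purely from the on-disk superblocks; because pt2 carries the newer superblock sequence number (4 versus 2 above), slot 0, which belonged to pt1, is left unconfigured. The slot check the test runs next can be written as a standalone sketch (same socket and jq filter as in the trace):

  # Sketch: only online raid bdevs are listed, and base bdev slot 0 is expected
  # to be unconfigured after the superblock-driven reassembly around pt2.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  configured=$("$rpc" -s "$sock" bdev_raid_get_bdevs online |
               jq -r '.[].base_bdevs_list[0].is_configured')
  [[ "$configured" == "false" ]]
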
00:42:06.902 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:06.902 12:05:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:07.468 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:42:07.468 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:42:07.725 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:42:07.725 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:07.725 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:42:07.982 [2024-06-10 12:05:39.823346] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2c34409c-766d-419c-9d7e-d1121e48496f '!=' 2c34409c-766d-419c-9d7e-d1121e48496f ']' 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 164813 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 164813 ']' 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 164813 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 164813 00:42:07.982 killing process with pid 164813 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 164813' 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # kill 164813 00:42:07.982 12:05:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # wait 164813 00:42:07.982 [2024-06-10 12:05:39.869958] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:07.982 [2024-06-10 12:05:39.870039] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:07.982 [2024-06-10 12:05:39.870109] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:07.982 [2024-06-10 12:05:39.870119] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:42:08.241 [2024-06-10 12:05:40.079562] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:09.616 12:05:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:42:09.616 00:42:09.616 real 0m17.143s 00:42:09.616 user 0m30.235s 00:42:09.616 
sys 0m2.692s 00:42:09.616 12:05:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:09.616 12:05:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:09.616 ************************************ 00:42:09.616 END TEST raid_superblock_test_md_interleaved 00:42:09.616 ************************************ 00:42:09.616 12:05:41 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:42:09.616 12:05:41 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:42:09.616 12:05:41 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:09.616 12:05:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:09.616 ************************************ 00:42:09.616 START TEST raid_rebuild_test_sb_md_interleaved 00:42:09.616 ************************************ 00:42:09.616 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false false 00:42:09.616 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:42:09.616 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:42:09.616 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:42:09.616 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:42:09.616 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:42:09.616 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=165334 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 165334 /var/tmp/spdk-raid.sock 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@830 -- # '[' -z 165334 ']' 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:42:09.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:09.617 12:05:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:09.617 [2024-06-10 12:05:41.606109] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:42:09.617 [2024-06-10 12:05:41.606513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165334 ] 00:42:09.617 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:09.617 Zero copy mechanism will not be used. 
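The trace above shows the harness launching bdevperf against the dedicated raid RPC socket and then blocking until the application answers on that socket (the "I/O size of 3145728 is greater than zero copy threshold" notice is simply bdevperf reacting to the `-o 3M` I/O size from the command line). A minimal sketch of that launch-and-wait step is below; the polling loop is illustrative only, the real test uses the `waitforlisten` helper from autotest_common.sh, and the paths are the ones that appear in the trace.

```bash
#!/usr/bin/env bash
# Sketch of the launch-and-wait step traced above (hand-rolled poll loop,
# not the autotest_common.sh waitforlisten helper).
rootdir=/home/vagrant/spdk_repo/spdk          # checkout location as seen in the trace
rpc_sock=/var/tmp/spdk-raid.sock

# Start bdevperf as the RPC-driven application under test; flags copied from
# the trace: 60 s randrw at 50/50, 3 MiB I/Os, queue depth 2, bdev_raid logging.
"$rootdir/build/examples/bdevperf" -r "$rpc_sock" -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
echo "bdevperf started as pid $raid_pid"

# Poll until the application is listening on the UNIX domain socket.
for _ in $(seq 1 100); do
    if "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
```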
00:42:09.875 [2024-06-10 12:05:41.792281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.134 [2024-06-10 12:05:42.084076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:10.392 [2024-06-10 12:05:42.323005] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:10.650 12:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:10.650 12:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:42:10.650 12:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:42:10.650 12:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:42:11.217 BaseBdev1_malloc 00:42:11.217 12:05:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:11.217 [2024-06-10 12:05:43.177713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:11.217 [2024-06-10 12:05:43.177970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:11.217 [2024-06-10 12:05:43.178147] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:42:11.217 [2024-06-10 12:05:43.178268] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:11.217 [2024-06-10 12:05:43.180498] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:11.217 [2024-06-10 12:05:43.180647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:11.217 BaseBdev1 00:42:11.217 12:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:42:11.217 12:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:42:11.475 BaseBdev2_malloc 00:42:11.736 12:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:42:11.736 [2024-06-10 12:05:43.716595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:42:11.736 [2024-06-10 12:05:43.716930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:11.736 [2024-06-10 12:05:43.717017] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:42:11.736 [2024-06-10 12:05:43.717131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:11.736 [2024-06-10 12:05:43.719132] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:11.736 [2024-06-10 12:05:43.719277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:42:11.736 BaseBdev2 00:42:11.736 12:05:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:42:12.002 spare_malloc 
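At this point the trace has built the two base bdevs and the spare's backing malloc bdev. A condensed sketch of that setup is below, assuming bdevperf is already listening on the raid RPC socket; the flag meanings are inferred from the trace itself (4096-byte data blocks plus 32 bytes of interleaved metadata, which matches the 4128-byte blocklen reported when the raid bdev is configured).

```bash
#!/usr/bin/env bash
# Sketch of the base-bdev setup performed in the trace above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for bdev in BaseBdev1 BaseBdev2; do
    # 32 MiB malloc bdev, 4096-byte blocks, 32-byte interleaved metadata.
    $rpc bdev_malloc_create 32 4096 -m 32 -i -b "${bdev}_malloc"
    # Wrap it in a passthru bdev so the raid module claims the wrapper,
    # not the raw malloc bdev.
    $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
done

# Backing store for the spare that is hot-added during the rebuild test.
$rpc bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
```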
00:42:12.003 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:12.261 spare_delay 00:42:12.261 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:42:12.520 [2024-06-10 12:05:44.404883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:12.520 [2024-06-10 12:05:44.405211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:12.520 [2024-06-10 12:05:44.405284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:42:12.520 [2024-06-10 12:05:44.405445] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:12.520 [2024-06-10 12:05:44.407706] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:12.520 [2024-06-10 12:05:44.407861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:12.520 spare 00:42:12.520 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:42:12.779 [2024-06-10 12:05:44.601001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:12.779 [2024-06-10 12:05:44.603315] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:12.779 [2024-06-10 12:05:44.603674] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:42:12.779 [2024-06-10 12:05:44.603788] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:42:12.779 [2024-06-10 12:05:44.603932] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:42:12.779 [2024-06-10 12:05:44.604091] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:42:12.779 [2024-06-10 12:05:44.604168] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:42:12.779 [2024-06-10 12:05:44.604327] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:12.779 12:05:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:12.779 "name": "raid_bdev1", 00:42:12.779 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:12.779 "strip_size_kb": 0, 00:42:12.779 "state": "online", 00:42:12.779 "raid_level": "raid1", 00:42:12.779 "superblock": true, 00:42:12.779 "num_base_bdevs": 2, 00:42:12.779 "num_base_bdevs_discovered": 2, 00:42:12.779 "num_base_bdevs_operational": 2, 00:42:12.779 "base_bdevs_list": [ 00:42:12.779 { 00:42:12.779 "name": "BaseBdev1", 00:42:12.779 "uuid": "45d9cc79-1589-5207-962c-0f3840a16ba7", 00:42:12.779 "is_configured": true, 00:42:12.779 "data_offset": 256, 00:42:12.779 "data_size": 7936 00:42:12.779 }, 00:42:12.779 { 00:42:12.779 "name": "BaseBdev2", 00:42:12.779 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:12.779 "is_configured": true, 00:42:12.779 "data_offset": 256, 00:42:12.779 "data_size": 7936 00:42:12.779 } 00:42:12.779 ] 00:42:12.779 }' 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:12.779 12:05:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:13.713 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:13.713 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:42:13.713 [2024-06-10 12:05:45.725409] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:13.713 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:42:13.713 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:13.713 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:13.971 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:42:13.971 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:42:13.971 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:42:13.971 12:05:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:42:14.229 [2024-06-10 12:05:46.125263] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:14.229 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:14.487 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:14.487 "name": "raid_bdev1", 00:42:14.487 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:14.487 "strip_size_kb": 0, 00:42:14.487 "state": "online", 00:42:14.487 "raid_level": "raid1", 00:42:14.487 "superblock": true, 00:42:14.487 "num_base_bdevs": 2, 00:42:14.487 "num_base_bdevs_discovered": 1, 00:42:14.487 "num_base_bdevs_operational": 1, 00:42:14.487 "base_bdevs_list": [ 00:42:14.487 { 00:42:14.487 "name": null, 00:42:14.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:14.487 "is_configured": false, 00:42:14.487 "data_offset": 256, 00:42:14.487 "data_size": 7936 00:42:14.487 }, 00:42:14.487 { 00:42:14.487 "name": "BaseBdev2", 00:42:14.487 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:14.487 "is_configured": true, 00:42:14.487 "data_offset": 256, 00:42:14.487 "data_size": 7936 00:42:14.487 } 00:42:14.487 ] 00:42:14.487 }' 00:42:14.487 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:14.487 12:05:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:15.055 12:05:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:15.621 [2024-06-10 12:05:47.405562] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:15.621 [2024-06-10 12:05:47.424527] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:42:15.621 [2024-06-10 12:05:47.426906] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:15.621 12:05:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:42:16.557 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:16.557 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:42:16.557 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:42:16.557 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:42:16.557 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:16.557 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:16.557 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:16.816 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:16.816 "name": "raid_bdev1", 00:42:16.816 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:16.816 "strip_size_kb": 0, 00:42:16.816 "state": "online", 00:42:16.816 "raid_level": "raid1", 00:42:16.816 "superblock": true, 00:42:16.816 "num_base_bdevs": 2, 00:42:16.816 "num_base_bdevs_discovered": 2, 00:42:16.816 "num_base_bdevs_operational": 2, 00:42:16.816 "process": { 00:42:16.816 "type": "rebuild", 00:42:16.816 "target": "spare", 00:42:16.816 "progress": { 00:42:16.816 "blocks": 3072, 00:42:16.816 "percent": 38 00:42:16.816 } 00:42:16.816 }, 00:42:16.816 "base_bdevs_list": [ 00:42:16.816 { 00:42:16.816 "name": "spare", 00:42:16.816 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:16.816 "is_configured": true, 00:42:16.816 "data_offset": 256, 00:42:16.816 "data_size": 7936 00:42:16.816 }, 00:42:16.816 { 00:42:16.816 "name": "BaseBdev2", 00:42:16.816 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:16.816 "is_configured": true, 00:42:16.816 "data_offset": 256, 00:42:16.816 "data_size": 7936 00:42:16.816 } 00:42:16.816 ] 00:42:16.816 }' 00:42:16.816 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:16.816 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:16.816 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:16.816 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:42:16.816 12:05:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:42:17.075 [2024-06-10 12:05:49.024080] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:17.075 [2024-06-10 12:05:49.037200] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:17.075 [2024-06-10 12:05:49.037395] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:17.075 [2024-06-10 12:05:49.037519] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:17.075 [2024-06-10 12:05:49.037599] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:17.075 12:05:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:17.075 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:17.334 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:17.334 "name": "raid_bdev1", 00:42:17.334 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:17.334 "strip_size_kb": 0, 00:42:17.334 "state": "online", 00:42:17.334 "raid_level": "raid1", 00:42:17.334 "superblock": true, 00:42:17.334 "num_base_bdevs": 2, 00:42:17.334 "num_base_bdevs_discovered": 1, 00:42:17.334 "num_base_bdevs_operational": 1, 00:42:17.334 "base_bdevs_list": [ 00:42:17.334 { 00:42:17.334 "name": null, 00:42:17.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:17.334 "is_configured": false, 00:42:17.334 "data_offset": 256, 00:42:17.334 "data_size": 7936 00:42:17.334 }, 00:42:17.334 { 00:42:17.334 "name": "BaseBdev2", 00:42:17.334 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:17.334 "is_configured": true, 00:42:17.334 "data_offset": 256, 00:42:17.334 "data_size": 7936 00:42:17.334 } 00:42:17.334 ] 00:42:17.334 }' 00:42:17.334 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:17.334 12:05:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- 
# raid_bdev_info='{ 00:42:18.270 "name": "raid_bdev1", 00:42:18.270 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:18.270 "strip_size_kb": 0, 00:42:18.270 "state": "online", 00:42:18.270 "raid_level": "raid1", 00:42:18.270 "superblock": true, 00:42:18.270 "num_base_bdevs": 2, 00:42:18.270 "num_base_bdevs_discovered": 1, 00:42:18.270 "num_base_bdevs_operational": 1, 00:42:18.270 "base_bdevs_list": [ 00:42:18.270 { 00:42:18.270 "name": null, 00:42:18.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:18.270 "is_configured": false, 00:42:18.270 "data_offset": 256, 00:42:18.270 "data_size": 7936 00:42:18.270 }, 00:42:18.270 { 00:42:18.270 "name": "BaseBdev2", 00:42:18.270 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:18.270 "is_configured": true, 00:42:18.270 "data_offset": 256, 00:42:18.270 "data_size": 7936 00:42:18.270 } 00:42:18.270 ] 00:42:18.270 }' 00:42:18.270 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:18.530 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:18.530 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:18.530 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:18.530 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:18.790 [2024-06-10 12:05:50.723116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:18.790 [2024-06-10 12:05:50.742739] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:42:18.790 [2024-06-10 12:05:50.745072] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:18.790 12:05:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:19.726 12:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:19.726 12:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:19.726 12:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:42:19.726 12:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:42:19.726 12:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:19.726 12:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:19.726 12:05:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:19.985 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:19.985 "name": "raid_bdev1", 00:42:19.985 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:19.985 "strip_size_kb": 0, 00:42:19.985 "state": "online", 00:42:19.985 "raid_level": "raid1", 00:42:19.985 "superblock": true, 00:42:19.985 "num_base_bdevs": 2, 00:42:19.985 "num_base_bdevs_discovered": 2, 00:42:19.985 "num_base_bdevs_operational": 2, 00:42:19.985 
"process": { 00:42:19.985 "type": "rebuild", 00:42:19.985 "target": "spare", 00:42:19.985 "progress": { 00:42:19.985 "blocks": 3072, 00:42:19.985 "percent": 38 00:42:19.985 } 00:42:19.985 }, 00:42:19.985 "base_bdevs_list": [ 00:42:19.985 { 00:42:19.985 "name": "spare", 00:42:19.985 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:19.985 "is_configured": true, 00:42:19.985 "data_offset": 256, 00:42:19.985 "data_size": 7936 00:42:19.985 }, 00:42:19.985 { 00:42:19.985 "name": "BaseBdev2", 00:42:19.985 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:19.985 "is_configured": true, 00:42:19.985 "data_offset": 256, 00:42:19.985 "data_size": 7936 00:42:19.985 } 00:42:19.985 ] 00:42:19.985 }' 00:42:19.985 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:42:20.242 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1612 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:20.242 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:20.500 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:20.500 "name": "raid_bdev1", 00:42:20.500 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:20.500 "strip_size_kb": 0, 00:42:20.500 "state": "online", 00:42:20.500 "raid_level": "raid1", 00:42:20.500 "superblock": true, 00:42:20.500 "num_base_bdevs": 2, 00:42:20.500 
"num_base_bdevs_discovered": 2, 00:42:20.500 "num_base_bdevs_operational": 2, 00:42:20.500 "process": { 00:42:20.500 "type": "rebuild", 00:42:20.500 "target": "spare", 00:42:20.500 "progress": { 00:42:20.500 "blocks": 4096, 00:42:20.500 "percent": 51 00:42:20.500 } 00:42:20.500 }, 00:42:20.500 "base_bdevs_list": [ 00:42:20.500 { 00:42:20.500 "name": "spare", 00:42:20.500 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:20.500 "is_configured": true, 00:42:20.500 "data_offset": 256, 00:42:20.500 "data_size": 7936 00:42:20.500 }, 00:42:20.500 { 00:42:20.500 "name": "BaseBdev2", 00:42:20.500 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:20.500 "is_configured": true, 00:42:20.500 "data_offset": 256, 00:42:20.500 "data_size": 7936 00:42:20.500 } 00:42:20.500 ] 00:42:20.500 }' 00:42:20.500 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:20.500 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:20.500 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:20.500 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:42:20.500 12:05:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:21.875 [2024-06-10 12:05:53.864342] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:21.875 [2024-06-10 12:05:53.865869] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:21.875 [2024-06-10 12:05:53.866144] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:21.875 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:21.875 "name": "raid_bdev1", 00:42:21.875 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:21.875 "strip_size_kb": 0, 00:42:21.875 "state": "online", 00:42:21.875 "raid_level": "raid1", 00:42:21.875 "superblock": true, 00:42:21.875 "num_base_bdevs": 2, 00:42:21.876 "num_base_bdevs_discovered": 2, 00:42:21.876 "num_base_bdevs_operational": 2, 00:42:21.876 "process": { 00:42:21.876 "type": "rebuild", 00:42:21.876 "target": "spare", 00:42:21.876 "progress": { 00:42:21.876 "blocks": 7680, 00:42:21.876 "percent": 96 00:42:21.876 } 
00:42:21.876 }, 00:42:21.876 "base_bdevs_list": [ 00:42:21.876 { 00:42:21.876 "name": "spare", 00:42:21.876 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:21.876 "is_configured": true, 00:42:21.876 "data_offset": 256, 00:42:21.876 "data_size": 7936 00:42:21.876 }, 00:42:21.876 { 00:42:21.876 "name": "BaseBdev2", 00:42:21.876 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:21.876 "is_configured": true, 00:42:21.876 "data_offset": 256, 00:42:21.876 "data_size": 7936 00:42:21.876 } 00:42:21.876 ] 00:42:21.876 }' 00:42:21.876 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:21.876 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:21.876 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:22.134 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:42:22.134 12:05:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:42:23.069 12:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:42:23.069 12:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:23.069 12:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:23.069 12:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:42:23.069 12:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:42:23.069 12:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:23.069 12:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:23.069 12:05:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:23.328 "name": "raid_bdev1", 00:42:23.328 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:23.328 "strip_size_kb": 0, 00:42:23.328 "state": "online", 00:42:23.328 "raid_level": "raid1", 00:42:23.328 "superblock": true, 00:42:23.328 "num_base_bdevs": 2, 00:42:23.328 "num_base_bdevs_discovered": 2, 00:42:23.328 "num_base_bdevs_operational": 2, 00:42:23.328 "base_bdevs_list": [ 00:42:23.328 { 00:42:23.328 "name": "spare", 00:42:23.328 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:23.328 "is_configured": true, 00:42:23.328 "data_offset": 256, 00:42:23.328 "data_size": 7936 00:42:23.328 }, 00:42:23.328 { 00:42:23.328 "name": "BaseBdev2", 00:42:23.328 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:23.328 "is_configured": true, 00:42:23.328 "data_offset": 256, 00:42:23.328 "data_size": 7936 00:42:23.328 } 00:42:23.328 ] 00:42:23.328 }' 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:23.328 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:23.329 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:23.329 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:23.588 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:23.588 "name": "raid_bdev1", 00:42:23.588 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:23.588 "strip_size_kb": 0, 00:42:23.588 "state": "online", 00:42:23.588 "raid_level": "raid1", 00:42:23.588 "superblock": true, 00:42:23.588 "num_base_bdevs": 2, 00:42:23.588 "num_base_bdevs_discovered": 2, 00:42:23.588 "num_base_bdevs_operational": 2, 00:42:23.588 "base_bdevs_list": [ 00:42:23.588 { 00:42:23.588 "name": "spare", 00:42:23.588 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:23.588 "is_configured": true, 00:42:23.588 "data_offset": 256, 00:42:23.588 "data_size": 7936 00:42:23.588 }, 00:42:23.588 { 00:42:23.588 "name": "BaseBdev2", 00:42:23.588 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:23.588 "is_configured": true, 00:42:23.588 "data_offset": 256, 00:42:23.588 "data_size": 7936 00:42:23.588 } 00:42:23.588 ] 00:42:23.588 }' 00:42:23.588 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:42:23.847 12:05:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:23.847 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:24.106 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:24.106 "name": "raid_bdev1", 00:42:24.106 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:24.106 "strip_size_kb": 0, 00:42:24.106 "state": "online", 00:42:24.106 "raid_level": "raid1", 00:42:24.106 "superblock": true, 00:42:24.106 "num_base_bdevs": 2, 00:42:24.106 "num_base_bdevs_discovered": 2, 00:42:24.106 "num_base_bdevs_operational": 2, 00:42:24.106 "base_bdevs_list": [ 00:42:24.106 { 00:42:24.106 "name": "spare", 00:42:24.106 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:24.106 "is_configured": true, 00:42:24.106 "data_offset": 256, 00:42:24.106 "data_size": 7936 00:42:24.106 }, 00:42:24.106 { 00:42:24.106 "name": "BaseBdev2", 00:42:24.106 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:24.106 "is_configured": true, 00:42:24.106 "data_offset": 256, 00:42:24.106 "data_size": 7936 00:42:24.106 } 00:42:24.106 ] 00:42:24.106 }' 00:42:24.106 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:24.106 12:05:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:24.673 12:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:24.932 [2024-06-10 12:05:56.815498] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:24.932 [2024-06-10 12:05:56.815752] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:24.932 [2024-06-10 12:05:56.815903] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:24.932 [2024-06-10 12:05:56.816043] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:24.932 [2024-06-10 12:05:56.816124] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:42:24.932 12:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:42:24.932 12:05:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:25.190 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:42:25.190 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:42:25.190 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:42:25.190 12:05:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:42:25.448 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:42:25.706 [2024-06-10 12:05:57.587622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:25.706 [2024-06-10 12:05:57.587851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:25.706 [2024-06-10 12:05:57.588005] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:42:25.706 [2024-06-10 12:05:57.588099] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:25.706 [2024-06-10 12:05:57.590342] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:25.706 [2024-06-10 12:05:57.590506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:25.706 [2024-06-10 12:05:57.590667] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:25.706 [2024-06-10 12:05:57.590809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:25.706 [2024-06-10 12:05:57.590998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:25.706 spare 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:25.706 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:25.706 [2024-06-10 12:05:57.691177] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:42:25.706 [2024-06-10 12:05:57.691335] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:42:25.706 [2024-06-10 12:05:57.691539] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:42:25.706 [2024-06-10 12:05:57.691710] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:42:25.706 [2024-06-10 12:05:57.691814] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:42:25.706 [2024-06-10 12:05:57.691971] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:25.971 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:25.971 "name": "raid_bdev1", 00:42:25.971 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:25.971 "strip_size_kb": 0, 00:42:25.971 "state": "online", 00:42:25.971 "raid_level": "raid1", 00:42:25.971 "superblock": true, 00:42:25.971 "num_base_bdevs": 2, 00:42:25.971 "num_base_bdevs_discovered": 2, 00:42:25.971 "num_base_bdevs_operational": 2, 00:42:25.971 "base_bdevs_list": [ 00:42:25.971 { 00:42:25.971 "name": "spare", 00:42:25.971 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:25.971 "is_configured": true, 00:42:25.971 "data_offset": 256, 00:42:25.971 "data_size": 7936 00:42:25.971 }, 00:42:25.971 { 00:42:25.971 "name": "BaseBdev2", 00:42:25.971 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:25.971 "is_configured": true, 00:42:25.971 "data_offset": 256, 00:42:25.971 "data_size": 7936 00:42:25.971 } 00:42:25.971 ] 00:42:25.971 }' 00:42:25.971 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:25.971 12:05:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:26.539 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:26.539 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:26.539 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:42:26.539 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:26.539 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:26.539 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:26.539 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:26.798 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:26.798 "name": "raid_bdev1", 00:42:26.798 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:26.798 "strip_size_kb": 0, 00:42:26.798 "state": "online", 00:42:26.798 "raid_level": "raid1", 00:42:26.798 "superblock": true, 00:42:26.798 "num_base_bdevs": 2, 00:42:26.798 "num_base_bdevs_discovered": 2, 00:42:26.798 "num_base_bdevs_operational": 2, 00:42:26.798 "base_bdevs_list": [ 00:42:26.798 { 00:42:26.798 "name": "spare", 00:42:26.798 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:26.798 "is_configured": true, 00:42:26.798 "data_offset": 256, 00:42:26.798 "data_size": 7936 00:42:26.798 }, 00:42:26.798 { 00:42:26.798 "name": "BaseBdev2", 00:42:26.798 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:26.798 "is_configured": true, 00:42:26.798 "data_offset": 256, 00:42:26.798 "data_size": 7936 00:42:26.798 } 00:42:26.798 ] 00:42:26.798 }' 00:42:26.798 12:05:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:26.798 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:26.798 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:26.798 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:26.798 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:26.798 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:42:27.057 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:42:27.057 12:05:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:42:27.315 [2024-06-10 12:05:59.216451] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:27.315 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:27.316 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:27.316 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:27.574 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:27.574 "name": "raid_bdev1", 00:42:27.574 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:27.574 "strip_size_kb": 0, 00:42:27.574 "state": "online", 00:42:27.574 "raid_level": "raid1", 00:42:27.574 "superblock": true, 00:42:27.574 "num_base_bdevs": 2, 00:42:27.574 "num_base_bdevs_discovered": 1, 00:42:27.574 "num_base_bdevs_operational": 1, 00:42:27.574 "base_bdevs_list": [ 00:42:27.574 { 00:42:27.574 "name": null, 00:42:27.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:27.574 "is_configured": false, 00:42:27.574 "data_offset": 256, 00:42:27.574 "data_size": 7936 00:42:27.574 }, 
00:42:27.574 { 00:42:27.574 "name": "BaseBdev2", 00:42:27.574 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:27.574 "is_configured": true, 00:42:27.574 "data_offset": 256, 00:42:27.574 "data_size": 7936 00:42:27.574 } 00:42:27.574 ] 00:42:27.574 }' 00:42:27.574 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:27.574 12:05:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:28.141 12:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:28.400 [2024-06-10 12:06:00.312701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:28.400 [2024-06-10 12:06:00.313067] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:42:28.400 [2024-06-10 12:06:00.313184] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:42:28.400 [2024-06-10 12:06:00.313365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:28.400 [2024-06-10 12:06:00.331157] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:42:28.400 [2024-06-10 12:06:00.333289] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:28.400 12:06:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:42:29.334 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:29.334 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:29.334 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:42:29.334 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:42:29.334 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:29.334 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:29.334 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:29.593 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:29.593 "name": "raid_bdev1", 00:42:29.593 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:29.593 "strip_size_kb": 0, 00:42:29.593 "state": "online", 00:42:29.593 "raid_level": "raid1", 00:42:29.593 "superblock": true, 00:42:29.593 "num_base_bdevs": 2, 00:42:29.593 "num_base_bdevs_discovered": 2, 00:42:29.593 "num_base_bdevs_operational": 2, 00:42:29.593 "process": { 00:42:29.593 "type": "rebuild", 00:42:29.593 "target": "spare", 00:42:29.593 "progress": { 00:42:29.593 "blocks": 3072, 00:42:29.593 "percent": 38 00:42:29.593 } 00:42:29.593 }, 00:42:29.593 "base_bdevs_list": [ 00:42:29.593 { 00:42:29.593 "name": "spare", 00:42:29.593 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:29.593 "is_configured": true, 00:42:29.593 "data_offset": 256, 00:42:29.593 "data_size": 7936 00:42:29.593 }, 00:42:29.593 { 
00:42:29.593 "name": "BaseBdev2", 00:42:29.593 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:29.593 "is_configured": true, 00:42:29.593 "data_offset": 256, 00:42:29.593 "data_size": 7936 00:42:29.593 } 00:42:29.593 ] 00:42:29.593 }' 00:42:29.593 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:29.593 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:29.593 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:29.852 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:42:29.852 12:06:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:42:29.852 [2024-06-10 12:06:01.907707] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:30.112 [2024-06-10 12:06:01.944732] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:30.112 [2024-06-10 12:06:01.944969] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:30.112 [2024-06-10 12:06:01.945027] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:30.112 [2024-06-10 12:06:01.945130] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:30.112 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:30.416 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:30.417 "name": "raid_bdev1", 00:42:30.417 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:30.417 "strip_size_kb": 0, 00:42:30.417 "state": "online", 00:42:30.417 "raid_level": "raid1", 00:42:30.417 "superblock": true, 00:42:30.417 "num_base_bdevs": 2, 
00:42:30.417 "num_base_bdevs_discovered": 1, 00:42:30.417 "num_base_bdevs_operational": 1, 00:42:30.417 "base_bdevs_list": [ 00:42:30.417 { 00:42:30.417 "name": null, 00:42:30.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:30.417 "is_configured": false, 00:42:30.417 "data_offset": 256, 00:42:30.417 "data_size": 7936 00:42:30.417 }, 00:42:30.417 { 00:42:30.417 "name": "BaseBdev2", 00:42:30.417 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:30.417 "is_configured": true, 00:42:30.417 "data_offset": 256, 00:42:30.417 "data_size": 7936 00:42:30.417 } 00:42:30.417 ] 00:42:30.417 }' 00:42:30.417 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:30.417 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:31.022 12:06:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:42:31.282 [2024-06-10 12:06:03.122991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:31.282 [2024-06-10 12:06:03.123310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:31.282 [2024-06-10 12:06:03.123389] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:42:31.282 [2024-06-10 12:06:03.123508] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:31.282 [2024-06-10 12:06:03.123798] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:31.282 [2024-06-10 12:06:03.123936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:31.282 [2024-06-10 12:06:03.124093] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:31.282 [2024-06-10 12:06:03.124183] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:42:31.282 [2024-06-10 12:06:03.124272] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:42:31.282 [2024-06-10 12:06:03.124351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:31.282 [2024-06-10 12:06:03.141962] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:42:31.282 spare 00:42:31.282 [2024-06-10 12:06:03.144267] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:31.282 12:06:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:42:32.219 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:32.219 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:32.219 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:42:32.219 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:42:32.219 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:32.219 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:32.219 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:32.477 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:32.477 "name": "raid_bdev1", 00:42:32.477 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:32.477 "strip_size_kb": 0, 00:42:32.477 "state": "online", 00:42:32.477 "raid_level": "raid1", 00:42:32.477 "superblock": true, 00:42:32.478 "num_base_bdevs": 2, 00:42:32.478 "num_base_bdevs_discovered": 2, 00:42:32.478 "num_base_bdevs_operational": 2, 00:42:32.478 "process": { 00:42:32.478 "type": "rebuild", 00:42:32.478 "target": "spare", 00:42:32.478 "progress": { 00:42:32.478 "blocks": 3072, 00:42:32.478 "percent": 38 00:42:32.478 } 00:42:32.478 }, 00:42:32.478 "base_bdevs_list": [ 00:42:32.478 { 00:42:32.478 "name": "spare", 00:42:32.478 "uuid": "56883d36-26d4-5103-8451-2d04dfff2709", 00:42:32.478 "is_configured": true, 00:42:32.478 "data_offset": 256, 00:42:32.478 "data_size": 7936 00:42:32.478 }, 00:42:32.478 { 00:42:32.478 "name": "BaseBdev2", 00:42:32.478 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:32.478 "is_configured": true, 00:42:32.478 "data_offset": 256, 00:42:32.478 "data_size": 7936 00:42:32.478 } 00:42:32.478 ] 00:42:32.478 }' 00:42:32.478 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:32.478 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:32.478 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:32.737 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:42:32.737 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:42:32.995 [2024-06-10 12:06:04.817903] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:32.995 [2024-06-10 12:06:04.855709] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:32.995 [2024-06-10 12:06:04.855973] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:32.995 [2024-06-10 12:06:04.856031] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:32.995 [2024-06-10 12:06:04.856115] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:32.995 12:06:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:33.253 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:33.253 "name": "raid_bdev1", 00:42:33.253 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:33.253 "strip_size_kb": 0, 00:42:33.253 "state": "online", 00:42:33.253 "raid_level": "raid1", 00:42:33.253 "superblock": true, 00:42:33.253 "num_base_bdevs": 2, 00:42:33.253 "num_base_bdevs_discovered": 1, 00:42:33.253 "num_base_bdevs_operational": 1, 00:42:33.253 "base_bdevs_list": [ 00:42:33.253 { 00:42:33.253 "name": null, 00:42:33.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:33.253 "is_configured": false, 00:42:33.253 "data_offset": 256, 00:42:33.253 "data_size": 7936 00:42:33.253 }, 00:42:33.253 { 00:42:33.253 "name": "BaseBdev2", 00:42:33.253 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:33.253 "is_configured": true, 00:42:33.253 "data_offset": 256, 00:42:33.253 "data_size": 7936 00:42:33.253 } 00:42:33.253 ] 00:42:33.253 }' 00:42:33.253 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:33.253 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:33.820 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:33.820 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
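verify_raid_bdev_state raid_bdev1 online raid1 0 1, traced here, boils down to fetching the raid bdev's JSON via bdev_raid_get_bdevs and comparing a handful of fields. The function below is an approximation for readability, not the helper from test/bdev/bdev_raid.sh: the RPC call, the jq selector and the field names are taken from the trace and the JSON dumps above, while the comparison logic itself is a sketch.

```bash
# Approximation of the state check traced above; not the bdev_raid.sh implementation.
verify_raid_state_sketch() {
    local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
    local info
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")

    [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] &&
    [[ $(jq -r '.raid_level' <<<"$info") == "$raid_level" ]] &&
    [[ $(jq -r '.strip_size_kb' <<<"$info") == "$strip_size" ]] &&
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == "$operational" ]]
}

# The call traced above: one base bdev left, array still online.
verify_raid_state_sketch raid_bdev1 online raid1 0 1
```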
00:42:33.820 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:42:33.820 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:33.820 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:33.820 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:33.820 12:06:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:34.078 12:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:34.078 "name": "raid_bdev1", 00:42:34.078 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:34.078 "strip_size_kb": 0, 00:42:34.078 "state": "online", 00:42:34.078 "raid_level": "raid1", 00:42:34.078 "superblock": true, 00:42:34.078 "num_base_bdevs": 2, 00:42:34.078 "num_base_bdevs_discovered": 1, 00:42:34.078 "num_base_bdevs_operational": 1, 00:42:34.078 "base_bdevs_list": [ 00:42:34.078 { 00:42:34.078 "name": null, 00:42:34.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:34.078 "is_configured": false, 00:42:34.078 "data_offset": 256, 00:42:34.078 "data_size": 7936 00:42:34.078 }, 00:42:34.078 { 00:42:34.078 "name": "BaseBdev2", 00:42:34.078 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:34.078 "is_configured": true, 00:42:34.078 "data_offset": 256, 00:42:34.078 "data_size": 7936 00:42:34.078 } 00:42:34.078 ] 00:42:34.078 }' 00:42:34.078 12:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:34.078 12:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:34.078 12:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:34.337 12:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:34.337 12:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:42:34.595 12:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:34.853 [2024-06-10 12:06:06.726200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:34.853 [2024-06-10 12:06:06.726515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:34.853 [2024-06-10 12:06:06.726644] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:42:34.853 [2024-06-10 12:06:06.726797] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:34.853 [2024-06-10 12:06:06.727051] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:34.853 [2024-06-10 12:06:06.727180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:34.853 [2024-06-10 12:06:06.727342] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:34.853 [2024-06-10 12:06:06.727443] bdev_raid.c:3562:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:34.853 [2024-06-10 12:06:06.727534] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:34.853 BaseBdev1 00:42:34.853 12:06:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:35.833 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:35.834 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:35.834 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:35.834 12:06:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:36.093 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:36.093 "name": "raid_bdev1", 00:42:36.093 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:36.093 "strip_size_kb": 0, 00:42:36.093 "state": "online", 00:42:36.093 "raid_level": "raid1", 00:42:36.093 "superblock": true, 00:42:36.093 "num_base_bdevs": 2, 00:42:36.093 "num_base_bdevs_discovered": 1, 00:42:36.093 "num_base_bdevs_operational": 1, 00:42:36.093 "base_bdevs_list": [ 00:42:36.093 { 00:42:36.093 "name": null, 00:42:36.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:36.093 "is_configured": false, 00:42:36.093 "data_offset": 256, 00:42:36.093 "data_size": 7936 00:42:36.093 }, 00:42:36.093 { 00:42:36.093 "name": "BaseBdev2", 00:42:36.093 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:36.093 "is_configured": true, 00:42:36.093 "data_offset": 256, 00:42:36.093 "data_size": 7936 00:42:36.093 } 00:42:36.093 ] 00:42:36.093 }' 00:42:36.093 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:36.093 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:36.661 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:36.661 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:36.661 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:42:36.661 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:36.661 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:36.661 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:36.661 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:36.920 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:36.920 "name": "raid_bdev1", 00:42:36.920 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:36.920 "strip_size_kb": 0, 00:42:36.920 "state": "online", 00:42:36.920 "raid_level": "raid1", 00:42:36.920 "superblock": true, 00:42:36.920 "num_base_bdevs": 2, 00:42:36.920 "num_base_bdevs_discovered": 1, 00:42:36.920 "num_base_bdevs_operational": 1, 00:42:36.920 "base_bdevs_list": [ 00:42:36.920 { 00:42:36.920 "name": null, 00:42:36.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:36.920 "is_configured": false, 00:42:36.920 "data_offset": 256, 00:42:36.920 "data_size": 7936 00:42:36.920 }, 00:42:36.920 { 00:42:36.920 "name": "BaseBdev2", 00:42:36.920 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:36.920 "is_configured": true, 00:42:36.920 "data_offset": 256, 00:42:36.920 "data_size": 7936 00:42:36.920 } 00:42:36.920 ] 00:42:36.920 }' 00:42:37.179 12:06:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@649 -- # local es=0 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:37.179 12:06:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:42:37.179 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:37.437 [2024-06-10 12:06:09.322824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:37.437 [2024-06-10 12:06:09.323215] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:37.437 [2024-06-10 12:06:09.323362] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:37.437 request: 00:42:37.437 { 00:42:37.437 "base_bdev": "BaseBdev1", 00:42:37.437 "raid_bdev": "raid_bdev1", 00:42:37.437 "method": "bdev_raid_add_base_bdev", 00:42:37.437 "req_id": 1 00:42:37.437 } 00:42:37.437 Got JSON-RPC error response 00:42:37.437 response: 00:42:37.437 { 00:42:37.437 "code": -22, 00:42:37.437 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:42:37.437 } 00:42:37.437 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # es=1 00:42:37.437 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:37.437 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:37.437 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:37.437 12:06:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:38.373 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:38.631 
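The bdev_raid_add_base_bdev call above is a negative test: BaseBdev1 was re-created from BaseBdev1_malloc, so its uuid no longer matches the raid superblock and the RPC is expected to fail with code -22. The trace routes the call through the NOT/valid_exec_arg machinery from common/autotest_common.sh; the helper below is a simplified stand-in for that wrapper, shown only to make the expected-failure pattern explicit.

```bash
# Simplified stand-in for the NOT() wrapper used above (not the autotest_common.sh
# implementation): run a command that is expected to fail and invert its status.
expect_failure() {
    if "$@"; then
        echo "unexpected success: $*" >&2
        return 1
    fi
    return 0
}

# As in the trace: adding BaseBdev1 back must be rejected with
# "Failed to add base bdev to RAID bdev: Invalid argument" (code -22).
expect_failure /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
```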
12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:38.631 "name": "raid_bdev1", 00:42:38.631 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:38.631 "strip_size_kb": 0, 00:42:38.631 "state": "online", 00:42:38.631 "raid_level": "raid1", 00:42:38.631 "superblock": true, 00:42:38.631 "num_base_bdevs": 2, 00:42:38.631 "num_base_bdevs_discovered": 1, 00:42:38.631 "num_base_bdevs_operational": 1, 00:42:38.631 "base_bdevs_list": [ 00:42:38.631 { 00:42:38.631 "name": null, 00:42:38.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:38.632 "is_configured": false, 00:42:38.632 "data_offset": 256, 00:42:38.632 "data_size": 7936 00:42:38.632 }, 00:42:38.632 { 00:42:38.632 "name": "BaseBdev2", 00:42:38.632 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:38.632 "is_configured": true, 00:42:38.632 "data_offset": 256, 00:42:38.632 "data_size": 7936 00:42:38.632 } 00:42:38.632 ] 00:42:38.632 }' 00:42:38.632 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:38.632 12:06:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:39.198 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:39.198 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:39.198 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:42:39.198 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:39.198 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:39.198 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:39.198 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:39.455 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:39.455 "name": "raid_bdev1", 00:42:39.455 "uuid": "c5b4ece2-7968-4369-8129-2fed8e9324c2", 00:42:39.455 "strip_size_kb": 0, 00:42:39.455 "state": "online", 00:42:39.455 "raid_level": "raid1", 00:42:39.455 "superblock": true, 00:42:39.455 "num_base_bdevs": 2, 00:42:39.455 "num_base_bdevs_discovered": 1, 00:42:39.456 "num_base_bdevs_operational": 1, 00:42:39.456 "base_bdevs_list": [ 00:42:39.456 { 00:42:39.456 "name": null, 00:42:39.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:39.456 "is_configured": false, 00:42:39.456 "data_offset": 256, 00:42:39.456 "data_size": 7936 00:42:39.456 }, 00:42:39.456 { 00:42:39.456 "name": "BaseBdev2", 00:42:39.456 "uuid": "0f3b1bb7-2338-5c29-be98-7cb51b042dfe", 00:42:39.456 "is_configured": true, 00:42:39.456 "data_offset": 256, 00:42:39.456 "data_size": 7936 00:42:39.456 } 00:42:39.456 ] 00:42:39.456 }' 00:42:39.456 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:39.714 12:06:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 165334 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 165334 ']' 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 165334 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 165334 00:42:39.714 killing process with pid 165334 00:42:39.714 Received shutdown signal, test time was about 60.000000 seconds 00:42:39.714 00:42:39.714 Latency(us) 00:42:39.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:39.714 =================================================================================================================== 00:42:39.714 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 165334' 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # kill 165334 00:42:39.714 12:06:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # wait 165334 00:42:39.714 [2024-06-10 12:06:11.607672] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:39.714 [2024-06-10 12:06:11.607788] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:39.714 [2024-06-10 12:06:11.607836] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:39.714 [2024-06-10 12:06:11.607846] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:42:39.972 [2024-06-10 12:06:11.988667] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:41.944 ************************************ 00:42:41.944 END TEST raid_rebuild_test_sb_md_interleaved 00:42:41.944 ************************************ 00:42:41.944 12:06:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:42:41.944 00:42:41.944 real 0m32.083s 00:42:41.944 user 0m51.027s 00:42:41.944 sys 0m3.452s 00:42:41.944 12:06:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:41.944 12:06:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:41.944 12:06:13 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:42:41.944 12:06:13 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:42:41.944 12:06:13 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 165334 ']' 00:42:41.944 12:06:13 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 165334 00:42:41.944 12:06:13 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:42:41.944 
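killprocess 165334, traced above, is the shutdown step for the raid test application: it checks that the pid is still an SPDK reactor before sending the signal and then waits so the shutdown messages land in the log. The function below mirrors only the checks visible in the trace (kill -0, ps --no-headers -o comm=, kill, wait); it is a sketch, not the autotest_common.sh helper.

```bash
# Sketch of the killprocess step traced above (pid 165334 is the raid test app
# started earlier in this test); not the autotest_common.sh implementation.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1                      # still running?
    [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]] || return 1
    kill "$pid"
    wait "$pid" 2>/dev/null || true                             # reap if it is our child
}

killprocess_sketch 165334
```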
************************************ 00:42:41.944 END TEST bdev_raid 00:42:41.944 ************************************ 00:42:41.944 00:42:41.944 real 26m42.939s 00:42:41.944 user 44m29.540s 00:42:41.944 sys 3m42.579s 00:42:41.944 12:06:13 bdev_raid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:41.944 12:06:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:41.944 12:06:13 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:42:41.944 12:06:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:41.944 12:06:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:41.944 12:06:13 -- common/autotest_common.sh@10 -- # set +x 00:42:41.944 ************************************ 00:42:41.944 START TEST bdevperf_config 00:42:41.944 ************************************ 00:42:41.944 12:06:13 bdevperf_config -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:42:41.944 * Looking for test storage... 00:42:41.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:41.944 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:41.944 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:41.944 12:06:13 bdevperf_config 
-- bdevperf/common.sh@10 -- # local filename= 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:41.944 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:41.944 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:41.944 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:41.944 12:06:13 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:47.210 12:06:18 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-06-10 12:06:13.968809] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:42:47.210 [2024-06-10 12:06:13.969010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166196 ] 00:42:47.210 Using job config with 4 jobs 00:42:47.210 [2024-06-10 12:06:14.147389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.210 [2024-06-10 12:06:14.393078] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.210 cpumask for '\''job0'\'' is too big 00:42:47.210 cpumask for '\''job1'\'' is too big 00:42:47.210 cpumask for '\''job2'\'' is too big 00:42:47.210 cpumask for '\''job3'\'' is too big 00:42:47.210 Running I/O for 2 seconds... 
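The create_job calls traced above build the -j job file for bdevperf one INI section at a time: a [global] section with rw=read and filename=Malloc0, then bare [job0]..[job3] sections, after which the file is passed to build/examples/bdevperf with -j together with the --json bdev config. The helper below mirrors only what the traced locals (job_section, rw, filename) show; the exact key names and any extra global defaults written by bdevperf/common.sh are an assumption here.

```bash
# Sketch of the create_job pattern traced above; not bdevperf/common.sh itself.
testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf

create_job_sketch() {
    local job_section=$1 rw=$2 filename=$3
    {
        echo "[$job_section]"
        [[ -n $rw ]] && echo "rw=$rw"                    # key names assumed, see note above
        [[ -n $filename ]] && echo "filename=$filename"
        echo
    } >> "$testconf"
}

# Mirrors the @17..@21 steps: one global read workload over Malloc0, four job sections.
create_job_sketch global read Malloc0
create_job_sketch job0
create_job_sketch job1
create_job_sketch job2
create_job_sketch job3

# The file is then handed to bdevperf, as in the traced command:
#   build/examples/bdevperf -t 2 --json .../conf.json -j "$testconf"
```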
00:42:47.210 00:42:47.210 Latency(us) 00:42:47.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29340.36 28.65 0.00 0.00 8716.98 1575.98 13356.86 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29319.10 28.63 0.00 0.00 8707.47 1536.98 11796.48 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29298.25 28.61 0.00 0.00 8697.47 1536.98 10860.25 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29277.31 28.59 0.00 0.00 8686.81 1529.17 10797.84 00:42:47.210 =================================================================================================================== 00:42:47.210 Total : 117235.01 114.49 0.00 0.00 8702.18 1529.17 13356.86' 00:42:47.210 12:06:18 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-06-10 12:06:13.968809] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:42:47.210 [2024-06-10 12:06:13.969010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166196 ] 00:42:47.210 Using job config with 4 jobs 00:42:47.210 [2024-06-10 12:06:14.147389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.210 [2024-06-10 12:06:14.393078] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.210 cpumask for '\''job0'\'' is too big 00:42:47.210 cpumask for '\''job1'\'' is too big 00:42:47.210 cpumask for '\''job2'\'' is too big 00:42:47.210 cpumask for '\''job3'\'' is too big 00:42:47.210 Running I/O for 2 seconds... 00:42:47.210 00:42:47.210 Latency(us) 00:42:47.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29340.36 28.65 0.00 0.00 8716.98 1575.98 13356.86 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29319.10 28.63 0.00 0.00 8707.47 1536.98 11796.48 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29298.25 28.61 0.00 0.00 8697.47 1536.98 10860.25 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29277.31 28.59 0.00 0.00 8686.81 1529.17 10797.84 00:42:47.210 =================================================================================================================== 00:42:47.210 Total : 117235.01 114.49 0.00 0.00 8702.18 1529.17 13356.86' 00:42:47.210 12:06:18 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 12:06:13.968809] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:42:47.210 [2024-06-10 12:06:13.969010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166196 ] 00:42:47.210 Using job config with 4 jobs 00:42:47.210 [2024-06-10 12:06:14.147389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.210 [2024-06-10 12:06:14.393078] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.210 cpumask for '\''job0'\'' is too big 00:42:47.210 cpumask for '\''job1'\'' is too big 00:42:47.210 cpumask for '\''job2'\'' is too big 00:42:47.210 cpumask for '\''job3'\'' is too big 00:42:47.210 Running I/O for 2 seconds... 00:42:47.210 00:42:47.210 Latency(us) 00:42:47.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29340.36 28.65 0.00 0.00 8716.98 1575.98 13356.86 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29319.10 28.63 0.00 0.00 8707.47 1536.98 11796.48 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29298.25 28.61 0.00 0.00 8697.47 1536.98 10860.25 00:42:47.210 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:47.210 Malloc0 : 2.02 29277.31 28.59 0.00 0.00 8686.81 1529.17 10797.84 00:42:47.210 =================================================================================================================== 00:42:47.210 Total : 117235.01 114.49 0.00 0.00 8702.18 1529.17 13356.86' 00:42:47.210 12:06:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:42:47.210 12:06:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:42:47.210 12:06:18 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:42:47.210 12:06:18 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:47.210 [2024-06-10 12:06:18.989798] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:42:47.210 [2024-06-10 12:06:18.990124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166262 ] 00:42:47.210 [2024-06-10 12:06:19.150973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.468 [2024-06-10 12:06:19.395372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:48.075 cpumask for 'job0' is too big 00:42:48.075 cpumask for 'job1' is too big 00:42:48.075 cpumask for 'job2' is too big 00:42:48.075 cpumask for 'job3' is too big 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:42:52.264 Running I/O for 2 seconds... 
00:42:52.264 00:42:52.264 Latency(us) 00:42:52.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:52.264 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:52.264 Malloc0 : 2.01 30514.18 29.80 0.00 0.00 8381.27 1677.41 14605.17 00:42:52.264 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:52.264 Malloc0 : 2.01 30493.77 29.78 0.00 0.00 8370.61 1646.20 14605.17 00:42:52.264 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:52.264 Malloc0 : 2.02 30473.82 29.76 0.00 0.00 8359.73 1638.40 14480.34 00:42:52.264 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:52.264 Malloc0 : 2.02 30546.99 29.83 0.00 0.00 8322.28 815.30 14480.34 00:42:52.264 =================================================================================================================== 00:42:52.264 Total : 122028.76 119.17 0.00 0.00 8358.43 815.30 14605.17' 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:42:52.264 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:42:52.264 12:06:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:52.265 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:52.265 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:52.265 12:06:24 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
00:42:57.547 12:06:28 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-06-10 12:06:24.111118] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:42:57.547 [2024-06-10 12:06:24.111273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166323 ] 00:42:57.547 Using job config with 3 jobs 00:42:57.547 [2024-06-10 12:06:24.275243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.547 [2024-06-10 12:06:24.561617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.547 cpumask for '\''job0'\'' is too big 00:42:57.548 cpumask for '\''job1'\'' is too big 00:42:57.548 cpumask for '\''job2'\'' is too big 00:42:57.548 Running I/O for 2 seconds... 00:42:57.548 00:42:57.548 Latency(us) 00:42:57.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43092.30 42.08 0.00 0.00 5934.49 1404.34 8925.38 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43063.54 42.05 0.00 0.00 5928.81 1419.95 8488.47 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43120.64 42.11 0.00 0.00 5910.13 663.16 8426.06 00:42:57.548 =================================================================================================================== 00:42:57.548 Total : 129276.48 126.25 0.00 0.00 5924.46 663.16 8925.38' 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-06-10 12:06:24.111118] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:42:57.548 [2024-06-10 12:06:24.111273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166323 ] 00:42:57.548 Using job config with 3 jobs 00:42:57.548 [2024-06-10 12:06:24.275243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.548 [2024-06-10 12:06:24.561617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.548 cpumask for '\''job0'\'' is too big 00:42:57.548 cpumask for '\''job1'\'' is too big 00:42:57.548 cpumask for '\''job2'\'' is too big 00:42:57.548 Running I/O for 2 seconds... 
00:42:57.548 00:42:57.548 Latency(us) 00:42:57.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43092.30 42.08 0.00 0.00 5934.49 1404.34 8925.38 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43063.54 42.05 0.00 0.00 5928.81 1419.95 8488.47 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43120.64 42.11 0.00 0.00 5910.13 663.16 8426.06 00:42:57.548 =================================================================================================================== 00:42:57.548 Total : 129276.48 126.25 0.00 0.00 5924.46 663.16 8925.38' 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 12:06:24.111118] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:42:57.548 [2024-06-10 12:06:24.111273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166323 ] 00:42:57.548 Using job config with 3 jobs 00:42:57.548 [2024-06-10 12:06:24.275243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.548 [2024-06-10 12:06:24.561617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.548 cpumask for '\''job0'\'' is too big 00:42:57.548 cpumask for '\''job1'\'' is too big 00:42:57.548 cpumask for '\''job2'\'' is too big 00:42:57.548 Running I/O for 2 seconds... 
00:42:57.548 00:42:57.548 Latency(us) 00:42:57.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43092.30 42.08 0.00 0.00 5934.49 1404.34 8925.38 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43063.54 42.05 0.00 0.00 5928.81 1419.95 8488.47 00:42:57.548 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:57.548 Malloc0 : 2.01 43120.64 42.11 0.00 0.00 5910.13 663.16 8426.06 00:42:57.548 =================================================================================================================== 00:42:57.548 Total : 129276.48 126.25 0.00 0.00 5924.46 663.16 8925.38' 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:42:57.548 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:57.548 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:57.548 00:42:57.548 12:06:28 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:42:57.548 12:06:29 bdevperf_config 
-- bdevperf/common.sh@9 -- # local rw= 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:57.548 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:57.548 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:57.548 12:06:29 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:43:02.819 12:06:33 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-06-10 12:06:29.075338] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:02.819 [2024-06-10 12:06:29.075507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166393 ] 00:43:02.819 Using job config with 4 jobs 00:43:02.819 [2024-06-10 12:06:29.232703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.819 [2024-06-10 12:06:29.469759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.819 cpumask for '\''job0'\'' is too big 00:43:02.819 cpumask for '\''job1'\'' is too big 00:43:02.819 cpumask for '\''job2'\'' is too big 00:43:02.819 cpumask for '\''job3'\'' is too big 00:43:02.819 Running I/O for 2 seconds... 
00:43:02.819 00:43:02.819 Latency(us) 00:43:02.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:02.819 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.819 Malloc0 : 2.03 15753.64 15.38 0.00 0.00 16238.38 3183.18 26464.06 00:43:02.819 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.819 Malloc1 : 2.03 15742.96 15.37 0.00 0.00 16235.97 3682.50 26464.06 00:43:02.819 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.819 Malloc0 : 2.03 15732.89 15.36 0.00 0.00 16201.09 3089.55 23093.64 00:43:02.819 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.819 Malloc1 : 2.04 15722.38 15.35 0.00 0.00 16198.88 3791.73 23343.30 00:43:02.819 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.819 Malloc0 : 2.04 15712.26 15.34 0.00 0.00 16161.20 2980.33 20472.20 00:43:02.819 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.819 Malloc1 : 2.04 15701.83 15.33 0.00 0.00 16160.50 3495.25 20472.20 00:43:02.819 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.819 Malloc0 : 2.04 15691.69 15.32 0.00 0.00 16126.18 3089.55 19723.22 00:43:02.819 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.819 Malloc1 : 2.04 15681.24 15.31 0.00 0.00 16123.63 3713.71 19848.05 00:43:02.819 =================================================================================================================== 00:43:02.819 Total : 125738.90 122.79 0.00 0.00 16180.73 2980.33 26464.06' 00:43:02.819 12:06:33 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-06-10 12:06:29.075338] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:02.820 [2024-06-10 12:06:29.075507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166393 ] 00:43:02.820 Using job config with 4 jobs 00:43:02.820 [2024-06-10 12:06:29.232703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.820 [2024-06-10 12:06:29.469759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.820 cpumask for '\''job0'\'' is too big 00:43:02.820 cpumask for '\''job1'\'' is too big 00:43:02.820 cpumask for '\''job2'\'' is too big 00:43:02.820 cpumask for '\''job3'\'' is too big 00:43:02.820 Running I/O for 2 seconds... 
00:43:02.820 00:43:02.820 Latency(us) 00:43:02.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:02.820 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc0 : 2.03 15753.64 15.38 0.00 0.00 16238.38 3183.18 26464.06 00:43:02.820 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc1 : 2.03 15742.96 15.37 0.00 0.00 16235.97 3682.50 26464.06 00:43:02.820 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc0 : 2.03 15732.89 15.36 0.00 0.00 16201.09 3089.55 23093.64 00:43:02.820 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc1 : 2.04 15722.38 15.35 0.00 0.00 16198.88 3791.73 23343.30 00:43:02.820 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc0 : 2.04 15712.26 15.34 0.00 0.00 16161.20 2980.33 20472.20 00:43:02.820 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc1 : 2.04 15701.83 15.33 0.00 0.00 16160.50 3495.25 20472.20 00:43:02.820 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc0 : 2.04 15691.69 15.32 0.00 0.00 16126.18 3089.55 19723.22 00:43:02.820 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc1 : 2.04 15681.24 15.31 0.00 0.00 16123.63 3713.71 19848.05 00:43:02.820 =================================================================================================================== 00:43:02.820 Total : 125738.90 122.79 0.00 0.00 16180.73 2980.33 26464.06' 00:43:02.820 12:06:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:43:02.820 12:06:33 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 12:06:29.075338] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:02.820 [2024-06-10 12:06:29.075507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166393 ] 00:43:02.820 Using job config with 4 jobs 00:43:02.820 [2024-06-10 12:06:29.232703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.820 [2024-06-10 12:06:29.469759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.820 cpumask for '\''job0'\'' is too big 00:43:02.820 cpumask for '\''job1'\'' is too big 00:43:02.820 cpumask for '\''job2'\'' is too big 00:43:02.820 cpumask for '\''job3'\'' is too big 00:43:02.820 Running I/O for 2 seconds... 
00:43:02.820 00:43:02.820 Latency(us) 00:43:02.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:02.820 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc0 : 2.03 15753.64 15.38 0.00 0.00 16238.38 3183.18 26464.06 00:43:02.820 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc1 : 2.03 15742.96 15.37 0.00 0.00 16235.97 3682.50 26464.06 00:43:02.820 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc0 : 2.03 15732.89 15.36 0.00 0.00 16201.09 3089.55 23093.64 00:43:02.820 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc1 : 2.04 15722.38 15.35 0.00 0.00 16198.88 3791.73 23343.30 00:43:02.820 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc0 : 2.04 15712.26 15.34 0.00 0.00 16161.20 2980.33 20472.20 00:43:02.820 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc1 : 2.04 15701.83 15.33 0.00 0.00 16160.50 3495.25 20472.20 00:43:02.820 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc0 : 2.04 15691.69 15.32 0.00 0.00 16126.18 3089.55 19723.22 00:43:02.820 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:43:02.820 Malloc1 : 2.04 15681.24 15.31 0.00 0.00 16123.63 3713.71 19848.05 00:43:02.820 =================================================================================================================== 00:43:02.820 Total : 125738.90 122.79 0.00 0.00 16180.73 2980.33 26464.06' 00:43:02.820 12:06:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:43:02.820 12:06:33 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:43:02.820 12:06:33 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:43:02.820 12:06:33 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:43:02.820 12:06:33 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:43:02.820 ************************************ 00:43:02.820 END TEST bdevperf_config 00:43:02.820 ************************************ 00:43:02.820 00:43:02.820 real 0m20.150s 00:43:02.820 user 0m18.372s 00:43:02.820 sys 0m1.190s 00:43:02.820 12:06:33 bdevperf_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:02.820 12:06:33 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:43:02.820 12:06:33 -- spdk/autotest.sh@196 -- # uname -s 00:43:02.820 12:06:33 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:43:02.820 12:06:33 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:43:02.820 12:06:33 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:02.820 12:06:33 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:02.820 12:06:33 -- common/autotest_common.sh@10 -- # set +x 00:43:02.820 ************************************ 00:43:02.820 START TEST reactor_set_interrupt 00:43:02.820 ************************************ 00:43:02.820 12:06:33 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:43:02.820 * Looking for test storage... 
00:43:02.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:02.820 12:06:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:43:02.820 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:43:02.820 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:02.820 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:43:02.820 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:43:02.820 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:43:02.820 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:43:02.820 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:43:02.820 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:43:02.820 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:43:02.820 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:43:02.820 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:43:02.820 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:43:02.820 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:43:02.820 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:43:02.820 12:06:34 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:43:02.821 
12:06:34 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@52 -- # 
CONFIG_VFIO_USER=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:43:02.821 12:06:34 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:43:02.821 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 
00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:43:02.821 12:06:34 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:43:02.821 #define SPDK_CONFIG_H 00:43:02.821 #define SPDK_CONFIG_APPS 1 00:43:02.821 #define SPDK_CONFIG_ARCH native 00:43:02.821 #define SPDK_CONFIG_ASAN 1 00:43:02.821 #undef SPDK_CONFIG_AVAHI 00:43:02.821 #undef SPDK_CONFIG_CET 00:43:02.821 #define SPDK_CONFIG_COVERAGE 1 00:43:02.821 #define SPDK_CONFIG_CROSS_PREFIX 00:43:02.821 #undef SPDK_CONFIG_CRYPTO 00:43:02.821 #undef SPDK_CONFIG_CRYPTO_MLX5 00:43:02.821 #undef SPDK_CONFIG_CUSTOMOCF 00:43:02.821 #undef SPDK_CONFIG_DAOS 00:43:02.821 #define SPDK_CONFIG_DAOS_DIR 00:43:02.821 #define SPDK_CONFIG_DEBUG 1 00:43:02.821 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:43:02.821 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:43:02.821 #define SPDK_CONFIG_DPDK_INC_DIR 00:43:02.821 #define SPDK_CONFIG_DPDK_LIB_DIR 00:43:02.821 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:43:02.821 #undef SPDK_CONFIG_DPDK_UADK 00:43:02.821 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:43:02.821 #define SPDK_CONFIG_EXAMPLES 1 00:43:02.821 #undef SPDK_CONFIG_FC 00:43:02.821 #define SPDK_CONFIG_FC_PATH 00:43:02.821 #define SPDK_CONFIG_FIO_PLUGIN 1 00:43:02.821 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:43:02.821 #undef SPDK_CONFIG_FUSE 00:43:02.821 #undef SPDK_CONFIG_FUZZER 00:43:02.821 #define SPDK_CONFIG_FUZZER_LIB 00:43:02.821 #undef SPDK_CONFIG_GOLANG 00:43:02.821 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:43:02.821 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:43:02.821 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:43:02.821 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:43:02.821 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:43:02.821 #undef SPDK_CONFIG_HAVE_LIBBSD 00:43:02.821 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:43:02.821 #define SPDK_CONFIG_IDXD 1 00:43:02.821 #undef SPDK_CONFIG_IDXD_KERNEL 00:43:02.821 #undef SPDK_CONFIG_IPSEC_MB 00:43:02.821 #define SPDK_CONFIG_IPSEC_MB_DIR 00:43:02.821 #define SPDK_CONFIG_ISAL 1 00:43:02.821 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:43:02.821 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:43:02.821 #define SPDK_CONFIG_LIBDIR 00:43:02.821 #undef SPDK_CONFIG_LTO 00:43:02.821 #define SPDK_CONFIG_MAX_LCORES 
00:43:02.821 #define SPDK_CONFIG_NVME_CUSE 1 00:43:02.821 #undef SPDK_CONFIG_OCF 00:43:02.821 #define SPDK_CONFIG_OCF_PATH 00:43:02.821 #define SPDK_CONFIG_OPENSSL_PATH 00:43:02.821 #undef SPDK_CONFIG_PGO_CAPTURE 00:43:02.821 #define SPDK_CONFIG_PGO_DIR 00:43:02.821 #undef SPDK_CONFIG_PGO_USE 00:43:02.821 #define SPDK_CONFIG_PREFIX /usr/local 00:43:02.821 #define SPDK_CONFIG_RAID5F 1 00:43:02.821 #undef SPDK_CONFIG_RBD 00:43:02.821 #define SPDK_CONFIG_RDMA 1 00:43:02.822 #define SPDK_CONFIG_RDMA_PROV verbs 00:43:02.822 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:43:02.822 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:43:02.822 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:43:02.822 #undef SPDK_CONFIG_SHARED 00:43:02.822 #undef SPDK_CONFIG_SMA 00:43:02.822 #define SPDK_CONFIG_TESTS 1 00:43:02.822 #undef SPDK_CONFIG_TSAN 00:43:02.822 #undef SPDK_CONFIG_UBLK 00:43:02.822 #define SPDK_CONFIG_UBSAN 1 00:43:02.822 #define SPDK_CONFIG_UNIT_TESTS 1 00:43:02.822 #undef SPDK_CONFIG_URING 00:43:02.822 #define SPDK_CONFIG_URING_PATH 00:43:02.822 #undef SPDK_CONFIG_URING_ZNS 00:43:02.822 #undef SPDK_CONFIG_USDT 00:43:02.822 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:43:02.822 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:43:02.822 #undef SPDK_CONFIG_VFIO_USER 00:43:02.822 #define SPDK_CONFIG_VFIO_USER_DIR 00:43:02.822 #define SPDK_CONFIG_VHOST 1 00:43:02.822 #define SPDK_CONFIG_VIRTIO 1 00:43:02.822 #undef SPDK_CONFIG_VTUNE 00:43:02.822 #define SPDK_CONFIG_VTUNE_DIR 00:43:02.822 #define SPDK_CONFIG_WERROR 1 00:43:02.822 #define SPDK_CONFIG_WPDK_DIR 00:43:02.822 #undef SPDK_CONFIG_XNVME 00:43:02.822 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:43:02.822 12:06:34 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:02.822 12:06:34 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:02.822 12:06:34 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:02.822 12:06:34 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:02.822 12:06:34 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:02.822 12:06:34 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:02.822 12:06:34 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:02.822 12:06:34 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:43:02.822 12:06:34 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:43:02.822 12:06:34 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:43:02.822 12:06:34 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:43:02.822 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:43:02.823 12:06:34 
reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 
00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@193 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:43:02.823 12:06:34 
reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:43:02.823 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 166493 ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 166493 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.XOe7av 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.XOe7av/tests/interrupt /tmp/spdk.XOe7av 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:43:02.824 12:06:34 
reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1248956416 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253683200 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4726784 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=10001739776 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10598277120 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6263689216 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6268399616 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=103061504 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=109395968 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1253675008 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # 
sizes["$mount"]=1253679104 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=92226842624 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=7475937280 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:43:02.824 * Looking for test storage... 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=10001739776 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=12812869632 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:02.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@1681 -- # set -o errtrace 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; 
print_backtrace >&2' ERR 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # true 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@1688 -- # xtrace_fd 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:43:02.824 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:43:02.825 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:43:02.825 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166539 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:43:02.825 12:06:34 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166539 /var/tmp/spdk.sock 
00:43:02.825 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@830 -- # '[' -z 166539 ']' 00:43:02.825 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:02.825 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:02.825 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:02.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:02.825 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:02.825 12:06:34 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:02.825 [2024-06-10 12:06:34.349612] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:02.825 [2024-06-10 12:06:34.350170] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166539 ] 00:43:02.825 [2024-06-10 12:06:34.572284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:02.825 [2024-06-10 12:06:34.781659] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:02.825 [2024-06-10 12:06:34.781828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.825 [2024-06-10 12:06:34.781833] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:43:03.084 [2024-06-10 12:06:35.071329] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
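
The block above launches the interrupt_tgt example app on a three-core mask and then blocks until its RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming a simple socket poll (the real waitforlisten helper in autotest_common.sh retries an actual RPC; its internals are not shown in this trace):

  intr_tgt_pid=0
  start_intr_tgt_sketch() {
      local rpc_addr=${1:-/var/tmp/spdk.sock} cpu_mask=${2:-0x07}
      # flags copied from the trace above; -m/-r select the cpumask and RPC socket
      /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m "$cpu_mask" -r "$rpc_addr" -E -g &
      intr_tgt_pid=$!
      trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
      local retries=100
      # assumed poll on the UNIX socket, not the exact waitforlisten implementation
      until [[ -S $rpc_addr ]] || (( --retries == 0 )); do sleep 0.1; done
  }
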
00:43:03.342 12:06:35 reactor_set_interrupt -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:03.342 12:06:35 reactor_set_interrupt -- common/autotest_common.sh@863 -- # return 0 00:43:03.342 12:06:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:43:03.342 12:06:35 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:03.908 Malloc0 00:43:03.908 Malloc1 00:43:03.908 Malloc2 00:43:03.908 12:06:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:43:03.908 12:06:35 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:43:03.908 12:06:35 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:03.908 12:06:35 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:43:03.908 5000+0 records in 00:43:03.909 5000+0 records out 00:43:03.909 10240000 bytes (10 MB, 9.8 MiB) copied, 0.03546 s, 289 MB/s 00:43:03.909 12:06:35 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:43:04.167 AIO0 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 166539 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 166539 without_thd 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=166539 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:43:04.167 12:06:36 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:43:04.426 12:06:36 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:43:04.426 spdk_thread ids are 1 on reactor0. 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166539 0 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166539 0 idle 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166539 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:04.426 12:06:36 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:04.427 12:06:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:04.427 12:06:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:04.427 12:06:36 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:04.427 12:06:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:04.427 12:06:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:04.427 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166539 -w 256 00:43:04.427 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166539 root 20 0 20.1t 151648 31776 S 0.0 1.2 0:00.84 reactor_0' 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166539 root 20 0 20.1t 151648 31776 S 0.0 1.2 0:00.84 reactor_0 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166539 1 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166539 1 idle 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166539 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:04.686 12:06:36 reactor_set_interrupt -- 
interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166539 -w 256 00:43:04.686 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:43:04.944 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166543 root 20 0 20.1t 151648 31776 S 0.0 1.2 0:00.00 reactor_1' 00:43:04.944 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166543 root 20 0 20.1t 151648 31776 S 0.0 1.2 0:00.00 reactor_1 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166539 2 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166539 2 idle 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166539 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166539 -w 256 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166544 root 20 0 20.1t 151648 31776 S 0.0 1.2 0:00.00 reactor_2' 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166544 root 20 0 20.1t 151648 31776 S 0.0 1.2 0:00.00 reactor_2 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:04.945 12:06:36 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:05.203 12:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:05.203 12:06:37 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:05.203 
12:06:37 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:05.203 12:06:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:05.204 12:06:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:05.204 12:06:37 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:05.204 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:43:05.204 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:43:05.204 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:43:05.462 [2024-06-10 12:06:37.292338] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:05.462 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:43:05.462 [2024-06-10 12:06:37.491652] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:43:05.462 [2024-06-10 12:06:37.492175] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:43:05.462 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:43:05.721 [2024-06-10 12:06:37.743719] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:43:05.721 [2024-06-10 12:06:37.744608] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166539 0 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166539 0 busy 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166539 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166539 -w 256 00:43:05.721 12:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166539 root 20 0 20.1t 151760 31776 R 99.9 1.2 0:01.28 reactor_0' 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166539 root 20 0 20.1t 151760 31776 R 99.9 1.2 0:01.28 reactor_0 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:05.979 12:06:37 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # cpu_rate=99.9 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166539 2 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166539 2 busy 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166539 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:05.979 12:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:05.980 12:06:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:05.980 12:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166539 -w 256 00:43:05.980 12:06:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:43:06.238 12:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166544 root 20 0 20.1t 151760 31776 R 99.9 1.2 0:00.34 reactor_2' 00:43:06.238 12:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:06.238 12:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:06.238 12:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166544 root 20 0 20.1t 151760 31776 R 99.9 1.2 0:00.34 reactor_2 00:43:06.238 12:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:43:06.238 12:06:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:43:06.238 12:06:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:43:06.238 12:06:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:43:06.239 12:06:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:43:06.239 12:06:38 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:06.239 12:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:43:06.497 [2024-06-10 12:06:38.367674] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:43:06.497 [2024-06-10 12:06:38.368089] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 166539 2 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166539 2 idle 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166539 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166539 -w 256 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166544 root 20 0 20.1t 151824 31776 S 0.0 1.2 0:00.61 reactor_2' 00:43:06.497 12:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166544 root 20 0 20.1t 151824 31776 S 0.0 1.2 0:00.61 reactor_2 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:06.756 12:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:43:07.015 [2024-06-10 12:06:38.831698] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:43:07.015 [2024-06-10 12:06:38.832689] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:43:07.015 12:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:43:07.015 12:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:43:07.015 12:06:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:43:07.015 [2024-06-10 12:06:39.048129] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 166539 0 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166539 0 idle 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166539 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166539 -w 256 00:43:07.015 12:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166539 root 20 0 20.1t 151912 31776 S 0.0 1.2 0:02.19 reactor_0' 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166539 root 20 0 20.1t 151912 31776 S 0.0 1.2 0:02.19 reactor_0 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:43:07.274 12:06:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 166539 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@949 -- # '[' -z 166539 ']' 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@953 -- # kill -0 166539 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@954 -- # uname 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 166539 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@967 -- # echo 'killing process with pid 166539' 00:43:07.274 killing process with pid 166539 00:43:07.274 12:06:39 reactor_set_interrupt -- 
common/autotest_common.sh@968 -- # kill 166539 00:43:07.274 12:06:39 reactor_set_interrupt -- common/autotest_common.sh@973 -- # wait 166539 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:43:09.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166686 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166686 /var/tmp/spdk.sock 00:43:09.177 12:06:40 reactor_set_interrupt -- common/autotest_common.sh@830 -- # '[' -z 166686 ']' 00:43:09.177 12:06:40 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:09.177 12:06:40 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:09.177 12:06:40 reactor_set_interrupt -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:09.177 12:06:40 reactor_set_interrupt -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:09.177 12:06:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:43:09.177 12:06:40 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:09.177 [2024-06-10 12:06:40.876994] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:09.177 [2024-06-10 12:06:40.877485] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166686 ] 00:43:09.177 [2024-06-10 12:06:41.052856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:09.435 [2024-06-10 12:06:41.286621] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:09.435 [2024-06-10 12:06:41.286700] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:09.435 [2024-06-10 12:06:41.286700] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:43:09.694 [2024-06-10 12:06:41.613439] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
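
The reactor_is_busy / reactor_is_idle probes used around every toggle above, and repeated below for pid 166686, reduce to sampling one thread's CPU usage with top in batch mode. A sketch, with the thresholds inferred from the [[ 99 -lt 70 ]] and [[ 0 -gt 30 ]] comparisons in the trace:

  pid=166539 idx=0 state=idle                # example values from the first pass
  line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g')
  cpu_rate=$(awk '{print $9}' <<< "$line")   # %CPU column of top's batch output
  cpu_rate=${cpu_rate%.*}                    # integer part (the exact truncation in common.sh may differ)
  if [[ $state == busy && $cpu_rate -lt 70 ]]; then
      echo "reactor_$idx is not busy enough"; exit 1
  elif [[ $state == idle && $cpu_rate -gt 30 ]]; then
      echo "reactor_$idx is not idle"; exit 1
  fi
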
00:43:09.953 12:06:41 reactor_set_interrupt -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:09.953 12:06:41 reactor_set_interrupt -- common/autotest_common.sh@863 -- # return 0 00:43:09.953 12:06:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:43:09.953 12:06:41 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:10.211 Malloc0 00:43:10.211 Malloc1 00:43:10.211 Malloc2 00:43:10.211 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:43:10.211 12:06:42 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:43:10.211 12:06:42 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:10.211 12:06:42 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:43:10.211 5000+0 records in 00:43:10.211 5000+0 records out 00:43:10.211 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0289093 s, 354 MB/s 00:43:10.211 12:06:42 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:43:10.471 AIO0 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 166686 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 166686 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=166686 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:43:10.471 12:06:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:43:10.731 12:06:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:43:10.990 spdk_thread ids are 1 on reactor0. 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166686 0 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166686 0 idle 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166686 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166686 -w 256 00:43:10.990 12:06:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166686 root 20 0 20.1t 151940 32044 S 0.0 1.2 0:00.82 reactor_0' 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166686 root 20 0 20.1t 151940 32044 S 0.0 1.2 0:00.82 reactor_0 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166686 1 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166686 1 idle 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166686 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle 
!= \b\u\s\y ]] 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166686 -w 256 00:43:11.250 12:06:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166689 root 20 0 20.1t 151940 32044 S 0.0 1.2 0:00.00 reactor_1' 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166689 root 20 0 20.1t 151940 32044 S 0.0 1.2 0:00.00 reactor_1 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166686 2 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166686 2 idle 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166686 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166686 -w 256 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166690 root 20 0 20.1t 151940 32044 S 0.0 1.2 0:00.00 reactor_2' 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166690 root 20 0 20.1t 151940 32044 S 0.0 1.2 0:00.00 reactor_2 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:11.509 12:06:43 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:11.509 12:06:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:11.510 12:06:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:11.510 12:06:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:11.510 12:06:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:43:11.510 12:06:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:43:11.768 [2024-06-10 12:06:43.736443] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:43:11.768 [2024-06-10 12:06:43.736967] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:43:11.768 [2024-06-10 12:06:43.737302] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:43:11.768 12:06:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:43:12.027 [2024-06-10 12:06:44.024429] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:43:12.027 [2024-06-10 12:06:44.025181] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166686 0 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166686 0 busy 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166686 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166686 -w 256 00:43:12.027 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166686 root 20 0 20.1t 152056 32044 R 99.9 1.2 0:01.30 reactor_0' 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166686 root 20 0 20.1t 152056 32044 R 99.9 1.2 0:01.30 reactor_0 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:43:12.286 
12:06:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166686 2 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166686 2 busy 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166686 00:43:12.286 12:06:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:43:12.287 12:06:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:12.287 12:06:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:43:12.287 12:06:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:12.287 12:06:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:12.287 12:06:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:12.287 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166686 -w 256 00:43:12.287 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166690 root 20 0 20.1t 152056 32044 R 99.9 1.2 0:00.35 reactor_2' 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166690 root 20 0 20.1t 152056 32044 R 99.9 1.2 0:00.35 reactor_2 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:12.546 12:06:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:43:12.805 [2024-06-10 12:06:44.661230] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:43:12.805 [2024-06-10 12:06:44.661920] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 166686 2 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166686 2 idle 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166686 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166686 -w 256 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166690 root 20 0 20.1t 152100 32044 S 0.0 1.2 0:00.63 reactor_2' 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166690 root 20 0 20.1t 152100 32044 S 0.0 1.2 0:00.63 reactor_2 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:12.805 12:06:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:13.065 12:06:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:13.065 12:06:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:13.065 12:06:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:13.065 12:06:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:43:13.065 [2024-06-10 12:06:45.109322] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:43:13.065 [2024-06-10 12:06:45.110037] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:43:13.065 [2024-06-10 12:06:45.110199] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 166686 0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166686 0 idle 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166686 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166686 -w 256 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166686 root 20 0 20.1t 152132 32044 S 0.0 1.2 0:02.21 reactor_0' 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166686 root 20 0 20.1t 152132 32044 S 0.0 1.2 0:02.21 reactor_0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:43:13.324 12:06:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 166686 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@949 -- # '[' -z 166686 ']' 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@953 -- # kill -0 166686 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@954 -- # uname 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 166686 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 
00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@967 -- # echo 'killing process with pid 166686' 00:43:13.324 killing process with pid 166686 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@968 -- # kill 166686 00:43:13.324 12:06:45 reactor_set_interrupt -- common/autotest_common.sh@973 -- # wait 166686 00:43:15.286 12:06:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:43:15.286 12:06:46 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:43:15.286 ************************************ 00:43:15.286 END TEST reactor_set_interrupt 00:43:15.286 ************************************ 00:43:15.286 00:43:15.286 real 0m13.001s 00:43:15.286 user 0m13.546s 00:43:15.286 sys 0m1.939s 00:43:15.286 12:06:46 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:15.286 12:06:46 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:15.286 12:06:47 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:43:15.286 12:06:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:15.286 12:06:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:15.286 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:43:15.286 ************************************ 00:43:15.286 START TEST reap_unregistered_poller 00:43:15.286 ************************************ 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:43:15.286 * Looking for test storage... 00:43:15.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:15.286 12:06:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:43:15.286 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:43:15.286 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:15.286 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:43:15.286 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
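killprocess, which performs the teardown just above and again at the end of the reap_unregistered_poller test, reduces to a liveness check, a guard against signalling a sudo wrapper, and then kill + wait so the target's exit status is collected. A sketch of those steps as they appear in the trace (the real helper in common/autotest_common.sh carries more error handling, and when the target was started via sudo it signals the child instead of bailing out as this sketch does):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2> /dev/null || return 1          # still alive?
      if [[ $(uname) == Linux ]]; then
          # the trace checks the comm name so a bare 'sudo' wrapper is never signalled directly
          [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"          # the target is a child of the test shell, so wait reaps it
  }

The real/user/sys line above is the bash time summary for the whole reactor_set_interrupt run.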
00:43:15.286 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:43:15.286 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:43:15.286 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 
00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:43:15.286 12:06:47 
reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:43:15.286 12:06:47 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:43:15.287 12:06:47 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:43:15.287 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:43:15.287 12:06:47 
reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:43:15.287 #define SPDK_CONFIG_H 00:43:15.287 #define SPDK_CONFIG_APPS 1 00:43:15.287 #define SPDK_CONFIG_ARCH native 00:43:15.287 #define SPDK_CONFIG_ASAN 1 00:43:15.287 #undef SPDK_CONFIG_AVAHI 00:43:15.287 #undef SPDK_CONFIG_CET 00:43:15.287 #define SPDK_CONFIG_COVERAGE 1 00:43:15.287 #define SPDK_CONFIG_CROSS_PREFIX 00:43:15.287 #undef SPDK_CONFIG_CRYPTO 00:43:15.287 #undef SPDK_CONFIG_CRYPTO_MLX5 00:43:15.287 #undef SPDK_CONFIG_CUSTOMOCF 00:43:15.287 #undef SPDK_CONFIG_DAOS 00:43:15.287 #define SPDK_CONFIG_DAOS_DIR 00:43:15.287 #define SPDK_CONFIG_DEBUG 1 00:43:15.287 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:43:15.287 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:43:15.287 #define SPDK_CONFIG_DPDK_INC_DIR 00:43:15.287 #define SPDK_CONFIG_DPDK_LIB_DIR 00:43:15.287 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:43:15.287 #undef SPDK_CONFIG_DPDK_UADK 00:43:15.287 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:43:15.287 #define SPDK_CONFIG_EXAMPLES 1 00:43:15.287 #undef SPDK_CONFIG_FC 00:43:15.287 #define SPDK_CONFIG_FC_PATH 00:43:15.287 #define SPDK_CONFIG_FIO_PLUGIN 1 00:43:15.287 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:43:15.287 #undef SPDK_CONFIG_FUSE 00:43:15.287 #undef SPDK_CONFIG_FUZZER 00:43:15.287 #define SPDK_CONFIG_FUZZER_LIB 00:43:15.287 #undef SPDK_CONFIG_GOLANG 00:43:15.287 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:43:15.287 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:43:15.287 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:43:15.287 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:43:15.287 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:43:15.287 #undef SPDK_CONFIG_HAVE_LIBBSD 00:43:15.287 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:43:15.287 #define SPDK_CONFIG_IDXD 1 00:43:15.287 #undef SPDK_CONFIG_IDXD_KERNEL 00:43:15.287 #undef SPDK_CONFIG_IPSEC_MB 00:43:15.287 #define SPDK_CONFIG_IPSEC_MB_DIR 00:43:15.287 #define SPDK_CONFIG_ISAL 1 00:43:15.287 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:43:15.287 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:43:15.287 #define SPDK_CONFIG_LIBDIR 00:43:15.287 #undef SPDK_CONFIG_LTO 00:43:15.287 #define SPDK_CONFIG_MAX_LCORES 00:43:15.287 #define SPDK_CONFIG_NVME_CUSE 1 00:43:15.287 #undef SPDK_CONFIG_OCF 00:43:15.287 #define SPDK_CONFIG_OCF_PATH 00:43:15.287 #define SPDK_CONFIG_OPENSSL_PATH 00:43:15.287 #undef SPDK_CONFIG_PGO_CAPTURE 00:43:15.287 #define SPDK_CONFIG_PGO_DIR 00:43:15.287 #undef SPDK_CONFIG_PGO_USE 00:43:15.287 #define SPDK_CONFIG_PREFIX /usr/local 00:43:15.287 #define SPDK_CONFIG_RAID5F 1 00:43:15.287 #undef SPDK_CONFIG_RBD 00:43:15.287 #define SPDK_CONFIG_RDMA 1 00:43:15.287 #define 
SPDK_CONFIG_RDMA_PROV verbs 00:43:15.287 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:43:15.287 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:43:15.287 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:43:15.287 #undef SPDK_CONFIG_SHARED 00:43:15.287 #undef SPDK_CONFIG_SMA 00:43:15.287 #define SPDK_CONFIG_TESTS 1 00:43:15.287 #undef SPDK_CONFIG_TSAN 00:43:15.287 #undef SPDK_CONFIG_UBLK 00:43:15.287 #define SPDK_CONFIG_UBSAN 1 00:43:15.287 #define SPDK_CONFIG_UNIT_TESTS 1 00:43:15.287 #undef SPDK_CONFIG_URING 00:43:15.287 #define SPDK_CONFIG_URING_PATH 00:43:15.287 #undef SPDK_CONFIG_URING_ZNS 00:43:15.287 #undef SPDK_CONFIG_USDT 00:43:15.287 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:43:15.287 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:43:15.287 #undef SPDK_CONFIG_VFIO_USER 00:43:15.287 #define SPDK_CONFIG_VFIO_USER_DIR 00:43:15.287 #define SPDK_CONFIG_VHOST 1 00:43:15.287 #define SPDK_CONFIG_VIRTIO 1 00:43:15.287 #undef SPDK_CONFIG_VTUNE 00:43:15.287 #define SPDK_CONFIG_VTUNE_DIR 00:43:15.287 #define SPDK_CONFIG_WERROR 1 00:43:15.287 #define SPDK_CONFIG_WPDK_DIR 00:43:15.287 #undef SPDK_CONFIG_XNVME 00:43:15.287 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:43:15.287 12:06:47 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:43:15.287 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:15.287 12:06:47 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:15.287 12:06:47 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:15.287 12:06:47 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:15.287 12:06:47 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:15.287 12:06:47 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:15.287 12:06:47 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:15.287 12:06:47 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:43:15.287 12:06:47 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:15.287 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:43:15.287 12:06:47 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:43:15.288 12:06:47 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:43:15.288 
12:06:47 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 
00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:43:15.288 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@189 -- # 
PYTHONDONTWRITEBYTECODE=1 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:43:15.289 12:06:47 reap_unregistered_poller -- 
common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 166871 ]] 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 166871 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.LT0lSj 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.LT0lSj/tests/interrupt /tmp/spdk.LT0lSj 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@358 -- # 
requested_size=2214592512 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1248956416 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253683200 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4726784 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=10001698816 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10598318080 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6263689216 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6268399616 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:43:15.289 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=103061504 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=109395968 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- 
# uses["$mount"]=6334464 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1253675008 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253679104 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=92215050240 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=7487729664 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:43:15.290 * Looking for test storage... 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=10001698816 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=12812910592 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 
00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:15.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@1681 -- # set -o errtrace 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # true 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@1688 -- # xtrace_fd 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:43:15.290 12:06:47 reap_unregistered_poller -- 
interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166924 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:43:15.290 12:06:47 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166924 /var/tmp/spdk.sock 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@830 -- # '[' -z 166924 ']' 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:15.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:15.290 12:06:47 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:43:15.290 [2024-06-10 12:06:47.312316] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:15.290 [2024-06-10 12:06:47.312493] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166924 ] 00:43:15.550 [2024-06-10 12:06:47.488771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:15.809 [2024-06-10 12:06:47.744025] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:15.809 [2024-06-10 12:06:47.744138] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.809 [2024-06-10 12:06:47.744142] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:43:16.068 [2024-06-10 12:06:48.110536] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
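start_intr_tgt, traced above, launches the interrupt_tgt example with the core mask, RPC socket and flags shown, remembers its pid (166924 here), installs the cleanup trap, and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock; the EAL parameter banner and the three "Reactor started" notices are the sign that it is up. A simplified stand-in for that start-and-wait step, polling the socket with the rpc_get_methods RPC rather than reproducing waitforlisten's exact loop:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  rpc_sock=/var/tmp/spdk.sock

  # flags copied from the traced command: 0x07 core mask, explicit RPC socket, -E -g
  /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_sock" -E -g &
  intr_tgt_pid=$!
  trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT   # simplified cleanup trap

  # give the target up to ~10 seconds to create and serve the UNIX socket
  for ((i = 0; i < 100; i++)); do
      "$rpc_py" -s "$rpc_sock" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done

Once the target is listening, the test takes a poller snapshot with rpc_cmd thread_get_pollers and pulls the names out with jq (.active_pollers[].name and .timed_pollers[].name), which is the JSON seen next; the same snapshot is repeated after the AIO bdev setup so the before/after poller lists can be compared.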
00:43:16.326 12:06:48 reap_unregistered_poller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:16.326 12:06:48 reap_unregistered_poller -- common/autotest_common.sh@863 -- # return 0 00:43:16.326 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:43:16.326 12:06:48 reap_unregistered_poller -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:16.326 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:43:16.327 12:06:48 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:43:16.327 12:06:48 reap_unregistered_poller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:43:16.327 "name": "app_thread", 00:43:16.327 "id": 1, 00:43:16.327 "active_pollers": [], 00:43:16.327 "timed_pollers": [ 00:43:16.327 { 00:43:16.327 "name": "rpc_subsystem_poll_servers", 00:43:16.327 "id": 1, 00:43:16.327 "state": "waiting", 00:43:16.327 "run_count": 0, 00:43:16.327 "busy_count": 0, 00:43:16.327 "period_ticks": 8400000 00:43:16.327 } 00:43:16.327 ], 00:43:16.327 "paused_pollers": [] 00:43:16.327 }' 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:43:16.327 5000+0 records in 00:43:16.327 5000+0 records out 00:43:16.327 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0228849 s, 447 MB/s 00:43:16.327 12:06:48 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:43:16.892 AIO0 00:43:16.892 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:43:16.892 12:06:48 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:17.150 12:06:49 reap_unregistered_poller -- 
interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:43:17.150 "name": "app_thread", 00:43:17.150 "id": 1, 00:43:17.150 "active_pollers": [], 00:43:17.150 "timed_pollers": [ 00:43:17.150 { 00:43:17.150 "name": "rpc_subsystem_poll_servers", 00:43:17.150 "id": 1, 00:43:17.150 "state": "waiting", 00:43:17.150 "run_count": 0, 00:43:17.150 "busy_count": 0, 00:43:17.150 "period_ticks": 8400000 00:43:17.150 } 00:43:17.150 ], 00:43:17.150 "paused_pollers": [] 00:43:17.150 }' 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:43:17.150 12:06:49 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 166924 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@949 -- # '[' -z 166924 ']' 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@953 -- # kill -0 166924 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@954 -- # uname 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 166924 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:17.150 killing process with pid 166924 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 166924' 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@968 -- # kill 166924 00:43:17.150 12:06:49 reap_unregistered_poller -- common/autotest_common.sh@973 -- # wait 166924 00:43:18.526 12:06:50 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:43:18.526 12:06:50 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:43:18.786 00:43:18.786 real 0m3.563s 00:43:18.786 user 0m3.083s 00:43:18.786 sys 0m0.654s 00:43:18.786 12:06:50 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:18.786 12:06:50 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:43:18.786 ************************************ 00:43:18.786 END TEST reap_unregistered_poller 00:43:18.786 ************************************ 00:43:18.786 12:06:50 -- spdk/autotest.sh@202 -- # uname -s 00:43:18.786 12:06:50 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:43:18.786 12:06:50 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:43:18.786 12:06:50 -- spdk/autotest.sh@209 -- # 
[[ 0 -eq 0 ]] 00:43:18.786 12:06:50 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:43:18.786 12:06:50 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:18.786 12:06:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:18.786 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:43:18.786 ************************************ 00:43:18.786 START TEST spdk_dd 00:43:18.786 ************************************ 00:43:18.786 12:06:50 spdk_dd -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:43:18.786 * Looking for test storage... 00:43:18.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:43:18.786 12:06:50 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:18.786 12:06:50 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:18.786 12:06:50 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:18.786 12:06:50 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:18.786 12:06:50 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:18.786 12:06:50 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:18.786 12:06:50 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:18.786 12:06:50 spdk_dd -- paths/export.sh@5 -- # export PATH 00:43:18.786 12:06:50 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:18.786 12:06:50 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:19.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:19.394 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:43:20.334 12:06:52 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:43:20.334 12:06:52 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:43:20.334 12:06:52 spdk_dd -- 
scripts/common.sh@312 -- # [[ -n '' ]] 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@230 -- # local class 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@232 -- # local progif 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@233 -- # class=01 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@15 -- # local i 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@24 -- # return 0 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:43:20.334 12:06:52 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:43:20.334 12:06:52 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@139 -- # local lib so 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 
00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:43:20.334 12:06:52 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:43:20.334 12:06:52 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:43:20.334 12:06:52 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:43:20.334 12:06:52 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:43:20.334 12:06:52 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:20.334 12:06:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:43:20.334 ************************************ 00:43:20.334 START TEST spdk_dd_basic_rw 00:43:20.334 ************************************ 00:43:20.334 
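The check_liburing step traced just above decides whether the uring-specific dd tests should run by listing the shared objects the spdk_dd binary loads and looking for liburing.so. A rough standalone equivalent, assuming the same build location as this workspace, is:

#!/usr/bin/env bash
# Sketch only: detect liburing the same way the trace above does, by asking
# the dynamic loader to list spdk_dd's shared objects without running it.
# The binary path is an assumption taken from this CI workspace.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0

while read -r lib _ so _; do
    # ldd-style lines look like "libaio.so.1 => /lib/.../libaio.so.1 (0x...)"
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(LD_TRACE_LOADED_OBJECTS=1 "$spdk_dd")

echo "liburing_in_use=$liburing_in_use"

In this run no liburing.so entry appears in the loader's list, so liburing_in_use stays 0.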
12:06:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:43:20.334 * Looking for test storage... 00:43:20.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw 
-- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:20.334 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:43:20.335 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:43:20.335 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:43:20.335 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:43:20.595 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ 
Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read 
Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 113 Data Units Written: 7 Host Read Commands: 2410 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:43:20.595 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute 
Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission 
Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 113 Data Units Written: 7 Host Read Commands: 2410 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 
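get_native_nvme_bs, traced above, derives the drive's native block size from spdk_nvme_identify output: it finds the "Current LBA Format" index and then reads that format's data size. A condensed sketch of the same extraction, with the identify binary path and PCI address taken from this run, is:

#!/usr/bin/env bash
# Sketch only: mirror the native block size detection traced above.
# Binary path and traddr are assumptions taken from this CI run.
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
pci=0000:00:10.0

id=$("$identify" -r "trtype:pcie traddr:$pci")

# "Current LBA Format: LBA Format #04" -> lbaf=04
re_current='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}

# "LBA Format #04: Data Size: 4096 ..." -> native_bs=4096
re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}

echo "native block size: ${native_bs:-unknown} bytes"

Here the current format is #04 with a 4096-byte data size, so basic_rw runs with native_bs=4096 and the bs-less-than-native case (bs=2048) is expected to fail, which is exactly what dd_bs_lt_native_bs verifies below.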
00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:43:20.596 ************************************ 00:43:20.596 START TEST dd_bs_lt_native_bs 00:43:20.596 ************************************ 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@649 -- # local es=0 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:20.596 12:06:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:43:20.596 { 00:43:20.596 "subsystems": [ 00:43:20.596 { 00:43:20.596 "subsystem": "bdev", 00:43:20.596 "config": [ 00:43:20.596 { 00:43:20.596 "params": { 00:43:20.596 "trtype": "pcie", 00:43:20.596 "traddr": "0000:00:10.0", 00:43:20.596 "name": "Nvme0" 00:43:20.596 }, 00:43:20.596 "method": "bdev_nvme_attach_controller" 00:43:20.596 }, 00:43:20.596 { 00:43:20.596 "method": "bdev_wait_for_examine" 00:43:20.596 } 00:43:20.596 ] 00:43:20.596 } 
00:43:20.596 ] 00:43:20.596 } 00:43:20.596 [2024-06-10 12:06:52.650635] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:20.596 [2024-06-10 12:06:52.650819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167238 ] 00:43:20.854 [2024-06-10 12:06:52.823193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:21.112 [2024-06-10 12:06:53.103606] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.680 [2024-06-10 12:06:53.535674] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:43:21.680 [2024-06-10 12:06:53.535792] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:22.615 [2024-06-10 12:06:54.408057] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:43:22.874 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # es=234 00:43:22.874 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:22.874 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # es=106 00:43:22.874 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # case "$es" in 00:43:22.874 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@669 -- # es=1 00:43:22.874 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:22.874 00:43:22.874 real 0m2.315s 00:43:22.874 user 0m2.007s 00:43:22.874 sys 0m0.262s 00:43:22.874 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:22.874 ************************************ 00:43:22.874 END TEST dd_bs_lt_native_bs 00:43:22.874 ************************************ 00:43:22.874 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:43:23.132 ************************************ 00:43:23.132 START TEST dd_rw 00:43:23.132 ************************************ 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # basic_rw 4096 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:43:23.132 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 
-- # bss+=($((native_bs << bs))) 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:23.133 12:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:23.699 12:06:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:43:23.699 12:06:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:23.699 12:06:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:23.699 12:06:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:23.699 { 00:43:23.699 "subsystems": [ 00:43:23.699 { 00:43:23.699 "subsystem": "bdev", 00:43:23.699 "config": [ 00:43:23.699 { 00:43:23.699 "params": { 00:43:23.699 "trtype": "pcie", 00:43:23.699 "traddr": "0000:00:10.0", 00:43:23.699 "name": "Nvme0" 00:43:23.699 }, 00:43:23.699 "method": "bdev_nvme_attach_controller" 00:43:23.699 }, 00:43:23.699 { 00:43:23.699 "method": "bdev_wait_for_examine" 00:43:23.699 } 00:43:23.699 ] 00:43:23.700 } 00:43:23.700 ] 00:43:23.700 } 00:43:23.700 [2024-06-10 12:06:55.574288] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:43:23.700 [2024-06-10 12:06:55.575027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167301 ] 00:43:23.700 [2024-06-10 12:06:55.755826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:23.958 [2024-06-10 12:06:55.973289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:25.918  Copying: 60/60 [kB] (average 19 MBps) 00:43:25.918 00:43:25.918 12:06:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:43:25.918 12:06:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:25.918 12:06:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:25.918 12:06:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:25.918 { 00:43:25.918 "subsystems": [ 00:43:25.918 { 00:43:25.918 "subsystem": "bdev", 00:43:25.918 "config": [ 00:43:25.918 { 00:43:25.918 "params": { 00:43:25.918 "trtype": "pcie", 00:43:25.918 "traddr": "0000:00:10.0", 00:43:25.918 "name": "Nvme0" 00:43:25.918 }, 00:43:25.918 "method": "bdev_nvme_attach_controller" 00:43:25.918 }, 00:43:25.918 { 00:43:25.918 "method": "bdev_wait_for_examine" 00:43:25.918 } 00:43:25.918 ] 00:43:25.918 } 00:43:25.918 ] 00:43:25.918 } 00:43:25.918 [2024-06-10 12:06:57.639099] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:25.918 [2024-06-10 12:06:57.639269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167329 ] 00:43:25.918 [2024-06-10 12:06:57.800839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:26.178 [2024-06-10 12:06:58.019109] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:27.812  Copying: 60/60 [kB] (average 19 MBps) 00:43:27.812 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:27.812 12:06:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:28.070 { 00:43:28.070 "subsystems": [ 00:43:28.070 { 00:43:28.070 "subsystem": "bdev", 
00:43:28.070 "config": [ 00:43:28.070 { 00:43:28.070 "params": { 00:43:28.070 "trtype": "pcie", 00:43:28.070 "traddr": "0000:00:10.0", 00:43:28.070 "name": "Nvme0" 00:43:28.070 }, 00:43:28.070 "method": "bdev_nvme_attach_controller" 00:43:28.070 }, 00:43:28.070 { 00:43:28.070 "method": "bdev_wait_for_examine" 00:43:28.070 } 00:43:28.070 ] 00:43:28.070 } 00:43:28.070 ] 00:43:28.070 } 00:43:28.070 [2024-06-10 12:06:59.909968] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:28.070 [2024-06-10 12:06:59.910493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167361 ] 00:43:28.070 [2024-06-10 12:07:00.117196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:28.328 [2024-06-10 12:07:00.384813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.271  Copying: 1024/1024 [kB] (average 500 MBps) 00:43:30.271 00:43:30.271 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:30.271 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:43:30.271 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:43:30.271 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:43:30.271 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:43:30.271 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:30.271 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:30.836 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:43:30.836 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:30.836 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:30.836 12:07:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:30.836 { 00:43:30.836 "subsystems": [ 00:43:30.836 { 00:43:30.836 "subsystem": "bdev", 00:43:30.836 "config": [ 00:43:30.836 { 00:43:30.836 "params": { 00:43:30.836 "trtype": "pcie", 00:43:30.836 "traddr": "0000:00:10.0", 00:43:30.836 "name": "Nvme0" 00:43:30.836 }, 00:43:30.836 "method": "bdev_nvme_attach_controller" 00:43:30.836 }, 00:43:30.836 { 00:43:30.836 "method": "bdev_wait_for_examine" 00:43:30.836 } 00:43:30.836 ] 00:43:30.836 } 00:43:30.836 ] 00:43:30.836 } 00:43:30.836 [2024-06-10 12:07:02.729330] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:43:30.836 [2024-06-10 12:07:02.729709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167395 ] 00:43:31.095 [2024-06-10 12:07:02.912705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:31.095 [2024-06-10 12:07:03.147677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:33.036  Copying: 60/60 [kB] (average 58 MBps) 00:43:33.036 00:43:33.036 12:07:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:43:33.036 12:07:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:33.036 12:07:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:33.036 12:07:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:33.036 { 00:43:33.036 "subsystems": [ 00:43:33.036 { 00:43:33.036 "subsystem": "bdev", 00:43:33.036 "config": [ 00:43:33.036 { 00:43:33.036 "params": { 00:43:33.037 "trtype": "pcie", 00:43:33.037 "traddr": "0000:00:10.0", 00:43:33.037 "name": "Nvme0" 00:43:33.037 }, 00:43:33.037 "method": "bdev_nvme_attach_controller" 00:43:33.037 }, 00:43:33.037 { 00:43:33.037 "method": "bdev_wait_for_examine" 00:43:33.037 } 00:43:33.037 ] 00:43:33.037 } 00:43:33.037 ] 00:43:33.037 } 00:43:33.037 [2024-06-10 12:07:05.069265] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:33.037 [2024-06-10 12:07:05.069725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167430 ] 00:43:33.295 [2024-06-10 12:07:05.252103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:33.553 [2024-06-10 12:07:05.479377] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:35.493  Copying: 60/60 [kB] (average 58 MBps) 00:43:35.493 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:35.493 12:07:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:35.493 [2024-06-10 12:07:07.322082] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 
24.03.0 initialization... 00:43:35.493 [2024-06-10 12:07:07.322835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167462 ] 00:43:35.493 { 00:43:35.493 "subsystems": [ 00:43:35.493 { 00:43:35.493 "subsystem": "bdev", 00:43:35.493 "config": [ 00:43:35.493 { 00:43:35.493 "params": { 00:43:35.493 "trtype": "pcie", 00:43:35.493 "traddr": "0000:00:10.0", 00:43:35.493 "name": "Nvme0" 00:43:35.493 }, 00:43:35.493 "method": "bdev_nvme_attach_controller" 00:43:35.493 }, 00:43:35.493 { 00:43:35.493 "method": "bdev_wait_for_examine" 00:43:35.493 } 00:43:35.493 ] 00:43:35.494 } 00:43:35.494 ] 00:43:35.494 } 00:43:35.494 [2024-06-10 12:07:07.487620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:35.826 [2024-06-10 12:07:07.761742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:37.773  Copying: 1024/1024 [kB] (average 1000 MBps) 00:43:37.773 00:43:37.773 12:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:43:37.773 12:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:37.773 12:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:43:37.773 12:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:43:37.773 12:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:43:37.773 12:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:43:37.773 12:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:37.773 12:07:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:38.338 12:07:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:43:38.338 12:07:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:38.338 12:07:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:38.338 12:07:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:38.338 { 00:43:38.338 "subsystems": [ 00:43:38.338 { 00:43:38.338 "subsystem": "bdev", 00:43:38.338 "config": [ 00:43:38.338 { 00:43:38.338 "params": { 00:43:38.338 "trtype": "pcie", 00:43:38.338 "traddr": "0000:00:10.0", 00:43:38.338 "name": "Nvme0" 00:43:38.338 }, 00:43:38.338 "method": "bdev_nvme_attach_controller" 00:43:38.338 }, 00:43:38.338 { 00:43:38.338 "method": "bdev_wait_for_examine" 00:43:38.338 } 00:43:38.338 ] 00:43:38.338 } 00:43:38.338 ] 00:43:38.338 } 00:43:38.338 [2024-06-10 12:07:10.343830] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:43:38.338 [2024-06-10 12:07:10.344249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167502 ] 00:43:38.596 [2024-06-10 12:07:10.514279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:38.854 [2024-06-10 12:07:10.747106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:40.882  Copying: 56/56 [kB] (average 54 MBps) 00:43:40.882 00:43:40.882 12:07:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:40.882 12:07:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:43:40.882 12:07:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:40.882 12:07:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:40.882 { 00:43:40.882 "subsystems": [ 00:43:40.882 { 00:43:40.882 "subsystem": "bdev", 00:43:40.882 "config": [ 00:43:40.882 { 00:43:40.882 "params": { 00:43:40.882 "trtype": "pcie", 00:43:40.882 "traddr": "0000:00:10.0", 00:43:40.882 "name": "Nvme0" 00:43:40.882 }, 00:43:40.882 "method": "bdev_nvme_attach_controller" 00:43:40.882 }, 00:43:40.882 { 00:43:40.882 "method": "bdev_wait_for_examine" 00:43:40.882 } 00:43:40.882 ] 00:43:40.882 } 00:43:40.882 ] 00:43:40.882 } 00:43:40.882 [2024-06-10 12:07:12.562238] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:40.882 [2024-06-10 12:07:12.562723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167529 ] 00:43:40.882 [2024-06-10 12:07:12.773194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:41.139 [2024-06-10 12:07:13.007974] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:43.075  Copying: 56/56 [kB] (average 54 MBps) 00:43:43.075 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:43.075 12:07:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:43.075 { 00:43:43.075 "subsystems": [ 00:43:43.075 { 00:43:43.075 "subsystem": "bdev", 
00:43:43.075 "config": [ 00:43:43.075 { 00:43:43.075 "params": { 00:43:43.075 "trtype": "pcie", 00:43:43.075 "traddr": "0000:00:10.0", 00:43:43.076 "name": "Nvme0" 00:43:43.076 }, 00:43:43.076 "method": "bdev_nvme_attach_controller" 00:43:43.076 }, 00:43:43.076 { 00:43:43.076 "method": "bdev_wait_for_examine" 00:43:43.076 } 00:43:43.076 ] 00:43:43.076 } 00:43:43.076 ] 00:43:43.076 } 00:43:43.076 [2024-06-10 12:07:15.020103] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:43.076 [2024-06-10 12:07:15.020616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167571 ] 00:43:43.333 [2024-06-10 12:07:15.212615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:43.591 [2024-06-10 12:07:15.441397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:45.223  Copying: 1024/1024 [kB] (average 1000 MBps) 00:43:45.223 00:43:45.223 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:45.223 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:43:45.223 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:43:45.223 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:43:45.223 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:43:45.223 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:45.223 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:45.788 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:43:45.788 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:45.788 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:45.788 12:07:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:46.046 { 00:43:46.046 "subsystems": [ 00:43:46.046 { 00:43:46.046 "subsystem": "bdev", 00:43:46.046 "config": [ 00:43:46.046 { 00:43:46.046 "params": { 00:43:46.046 "trtype": "pcie", 00:43:46.046 "traddr": "0000:00:10.0", 00:43:46.046 "name": "Nvme0" 00:43:46.046 }, 00:43:46.046 "method": "bdev_nvme_attach_controller" 00:43:46.046 }, 00:43:46.046 { 00:43:46.046 "method": "bdev_wait_for_examine" 00:43:46.046 } 00:43:46.046 ] 00:43:46.046 } 00:43:46.046 ] 00:43:46.046 } 00:43:46.046 [2024-06-10 12:07:17.873921] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:43:46.046 [2024-06-10 12:07:17.874284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167602 ] 00:43:46.046 [2024-06-10 12:07:18.057519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:46.304 [2024-06-10 12:07:18.291879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:48.244  Copying: 56/56 [kB] (average 54 MBps) 00:43:48.244 00:43:48.244 12:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:43:48.244 12:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:48.244 12:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:48.244 12:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:48.244 { 00:43:48.244 "subsystems": [ 00:43:48.244 { 00:43:48.244 "subsystem": "bdev", 00:43:48.244 "config": [ 00:43:48.244 { 00:43:48.244 "params": { 00:43:48.244 "trtype": "pcie", 00:43:48.244 "traddr": "0000:00:10.0", 00:43:48.244 "name": "Nvme0" 00:43:48.244 }, 00:43:48.244 "method": "bdev_nvme_attach_controller" 00:43:48.244 }, 00:43:48.244 { 00:43:48.244 "method": "bdev_wait_for_examine" 00:43:48.244 } 00:43:48.244 ] 00:43:48.244 } 00:43:48.244 ] 00:43:48.244 } 00:43:48.244 [2024-06-10 12:07:20.245364] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:48.244 [2024-06-10 12:07:20.245749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167637 ] 00:43:48.503 [2024-06-10 12:07:20.414027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:48.762 [2024-06-10 12:07:20.661876] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:50.701  Copying: 56/56 [kB] (average 54 MBps) 00:43:50.701 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:50.701 12:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:50.701 { 00:43:50.701 "subsystems": [ 00:43:50.701 { 00:43:50.701 "subsystem": "bdev", 
00:43:50.701 "config": [ 00:43:50.701 { 00:43:50.701 "params": { 00:43:50.701 "trtype": "pcie", 00:43:50.701 "traddr": "0000:00:10.0", 00:43:50.701 "name": "Nvme0" 00:43:50.701 }, 00:43:50.701 "method": "bdev_nvme_attach_controller" 00:43:50.701 }, 00:43:50.701 { 00:43:50.701 "method": "bdev_wait_for_examine" 00:43:50.701 } 00:43:50.701 ] 00:43:50.701 } 00:43:50.701 ] 00:43:50.701 } 00:43:50.701 [2024-06-10 12:07:22.551270] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:50.701 [2024-06-10 12:07:22.552223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167670 ] 00:43:50.701 [2024-06-10 12:07:22.738511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:50.959 [2024-06-10 12:07:22.970734] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:52.896  Copying: 1024/1024 [kB] (average 1000 MBps) 00:43:52.896 00:43:52.896 12:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:43:52.896 12:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:52.896 12:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:43:52.896 12:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:43:52.896 12:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:43:52.896 12:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:43:52.896 12:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:52.896 12:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:53.830 12:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:43:53.830 12:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:53.830 12:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:53.830 12:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:53.830 { 00:43:53.830 "subsystems": [ 00:43:53.830 { 00:43:53.830 "subsystem": "bdev", 00:43:53.830 "config": [ 00:43:53.830 { 00:43:53.830 "params": { 00:43:53.830 "trtype": "pcie", 00:43:53.830 "traddr": "0000:00:10.0", 00:43:53.830 "name": "Nvme0" 00:43:53.830 }, 00:43:53.830 "method": "bdev_nvme_attach_controller" 00:43:53.830 }, 00:43:53.830 { 00:43:53.830 "method": "bdev_wait_for_examine" 00:43:53.830 } 00:43:53.830 ] 00:43:53.830 } 00:43:53.830 ] 00:43:53.830 } 00:43:53.830 [2024-06-10 12:07:25.640050] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:43:53.830 [2024-06-10 12:07:25.640599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167711 ] 00:43:53.830 [2024-06-10 12:07:25.820128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:54.089 [2024-06-10 12:07:26.051984] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:56.029  Copying: 48/48 [kB] (average 46 MBps) 00:43:56.029 00:43:56.029 12:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:43:56.029 12:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:56.029 12:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:56.029 12:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:56.029 { 00:43:56.029 "subsystems": [ 00:43:56.029 { 00:43:56.029 "subsystem": "bdev", 00:43:56.029 "config": [ 00:43:56.029 { 00:43:56.029 "params": { 00:43:56.029 "trtype": "pcie", 00:43:56.029 "traddr": "0000:00:10.0", 00:43:56.029 "name": "Nvme0" 00:43:56.029 }, 00:43:56.029 "method": "bdev_nvme_attach_controller" 00:43:56.029 }, 00:43:56.029 { 00:43:56.029 "method": "bdev_wait_for_examine" 00:43:56.029 } 00:43:56.029 ] 00:43:56.030 } 00:43:56.030 ] 00:43:56.030 } 00:43:56.030 [2024-06-10 12:07:27.901997] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:56.030 [2024-06-10 12:07:27.902411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167742 ] 00:43:56.030 [2024-06-10 12:07:28.084963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.598 [2024-06-10 12:07:28.364471] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:58.232  Copying: 48/48 [kB] (average 46 MBps) 00:43:58.232 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:58.232 12:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:58.490 { 00:43:58.490 "subsystems": [ 00:43:58.490 { 00:43:58.490 "subsystem": "bdev", 
00:43:58.490 "config": [ 00:43:58.490 { 00:43:58.490 "params": { 00:43:58.490 "trtype": "pcie", 00:43:58.490 "traddr": "0000:00:10.0", 00:43:58.490 "name": "Nvme0" 00:43:58.490 }, 00:43:58.490 "method": "bdev_nvme_attach_controller" 00:43:58.490 }, 00:43:58.490 { 00:43:58.490 "method": "bdev_wait_for_examine" 00:43:58.490 } 00:43:58.490 ] 00:43:58.490 } 00:43:58.490 ] 00:43:58.490 } 00:43:58.490 [2024-06-10 12:07:30.349228] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:43:58.490 [2024-06-10 12:07:30.349685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167778 ] 00:43:58.490 [2024-06-10 12:07:30.531423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:58.747 [2024-06-10 12:07:30.771390] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.700  Copying: 1024/1024 [kB] (average 1000 MBps) 00:44:00.700 00:44:00.700 12:07:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:44:00.700 12:07:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:44:00.700 12:07:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:44:00.700 12:07:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:44:00.700 12:07:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:44:00.700 12:07:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:44:00.700 12:07:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:44:01.268 12:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:44:01.268 12:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:44:01.268 12:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:44:01.268 12:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:44:01.268 { 00:44:01.268 "subsystems": [ 00:44:01.268 { 00:44:01.268 "subsystem": "bdev", 00:44:01.268 "config": [ 00:44:01.268 { 00:44:01.268 "params": { 00:44:01.268 "trtype": "pcie", 00:44:01.268 "traddr": "0000:00:10.0", 00:44:01.268 "name": "Nvme0" 00:44:01.268 }, 00:44:01.268 "method": "bdev_nvme_attach_controller" 00:44:01.268 }, 00:44:01.268 { 00:44:01.268 "method": "bdev_wait_for_examine" 00:44:01.268 } 00:44:01.268 ] 00:44:01.268 } 00:44:01.268 ] 00:44:01.268 } 00:44:01.268 [2024-06-10 12:07:33.102000] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:01.268 [2024-06-10 12:07:33.102387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167810 ] 00:44:01.268 [2024-06-10 12:07:33.307841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:01.527 [2024-06-10 12:07:33.569521] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.465  Copying: 48/48 [kB] (average 46 MBps) 00:44:03.465 00:44:03.465 12:07:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:44:03.465 12:07:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:44:03.465 12:07:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:44:03.465 12:07:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:44:03.722 { 00:44:03.722 "subsystems": [ 00:44:03.722 { 00:44:03.722 "subsystem": "bdev", 00:44:03.722 "config": [ 00:44:03.722 { 00:44:03.722 "params": { 00:44:03.722 "trtype": "pcie", 00:44:03.722 "traddr": "0000:00:10.0", 00:44:03.722 "name": "Nvme0" 00:44:03.722 }, 00:44:03.722 "method": "bdev_nvme_attach_controller" 00:44:03.722 }, 00:44:03.722 { 00:44:03.722 "method": "bdev_wait_for_examine" 00:44:03.722 } 00:44:03.722 ] 00:44:03.722 } 00:44:03.722 ] 00:44:03.722 } 00:44:03.722 [2024-06-10 12:07:35.577370] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:03.722 [2024-06-10 12:07:35.577977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167850 ] 00:44:03.722 [2024-06-10 12:07:35.766041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:03.979 [2024-06-10 12:07:36.000553] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:05.919  Copying: 48/48 [kB] (average 46 MBps) 00:44:05.919 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:44:05.919 12:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:44:05.919 { 00:44:05.919 "subsystems": [ 00:44:05.919 { 00:44:05.919 "subsystem": "bdev", 
00:44:05.919 "config": [ 00:44:05.919 { 00:44:05.919 "params": { 00:44:05.919 "trtype": "pcie", 00:44:05.919 "traddr": "0000:00:10.0", 00:44:05.919 "name": "Nvme0" 00:44:05.919 }, 00:44:05.919 "method": "bdev_nvme_attach_controller" 00:44:05.919 }, 00:44:05.919 { 00:44:05.919 "method": "bdev_wait_for_examine" 00:44:05.919 } 00:44:05.919 ] 00:44:05.919 } 00:44:05.919 ] 00:44:05.919 } 00:44:05.919 [2024-06-10 12:07:37.855327] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:05.919 [2024-06-10 12:07:37.855848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167879 ] 00:44:06.177 [2024-06-10 12:07:38.050968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:06.434 [2024-06-10 12:07:38.310467] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:08.377  Copying: 1024/1024 [kB] (average 1000 MBps) 00:44:08.377 00:44:08.377 ************************************ 00:44:08.377 END TEST dd_rw 00:44:08.377 ************************************ 00:44:08.377 00:44:08.377 real 0m45.332s 00:44:08.377 user 0m38.802s 00:44:08.377 sys 0m5.219s 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:44:08.377 ************************************ 00:44:08.377 START TEST dd_rw_offset 00:44:08.377 ************************************ 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # basic_offset 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=41e0zwmoan3amqztafzvd2wioz2bnf7mq4irrih1embeulfg32luta9kiwc4klptqf66wom4tvhrjsesib0wun21ouwokvvmsjnml8khrywcllkvgog3z13w23nbw0qs5rsec7jgx803m8yaz0aj6dof74k6zgvgsk9hfjv6g7nd896eqim3fmg5i882voma3gw15zx1uvh0s3sus8zljowxe7y397u6ek6tqxlyuoa95wz40xzxgpcif1fzib0bfbdv1xixqssa3v6wcvgc3e4ensh4vowuofuh4drwbfmvtylvyc1rjdagxwkmh35wvbqeq8q35us522mmeyn95zueiq2bplr660kgbbvb8dvzulm4ox773229z46x2sv7gj52qhqagzspdw6d62m6nav6qabi274ia15vsemvsg0dxr4t9j0kp4v8q2ol1vwy8zkqyaeo68y9evud41sq5kophdw5hcqlfz6xpx1c97g5bobtie2dacsbpkywz4ekz9x19sluzlzhnbjrwessa67digzs92elsdc1k2ztzlrkfb9u0yi3mbdm2wxmfwjpbbjmk47purqasov8xjs532u8r7t0fq8wl6vp28prm63hspt9ogoczk7kpuybmzgd9speas3q94z2ebz2yz1gjvawidn2sxkts5kgfh37btrlo9cjunzqzkc0p0ovlwnpc816ccw8cnt6pprpk25ggmio5yvkhtiu62cka45p7mrexqt9d2v03cb5giqbufhckb45257b0zl5a5m6ydmnsoohiklixx4wegd56y661z6jdixcuiy3vndylk96xvpdco13jlftxpyg2pbg27fxhg6ypo99ptefesh9iyrifgnjwawj6qoi70zt7quj72y1gopioapzmxih34v9jhzrbjhayti5rn80hv86i1cmhdc1fctb3hrhn3b41cgcszxdqh4iqsafhir4utiq58bk4vapeu81sffr92j1epaz6w9gcdhpdgk6jd68uwhpjdp2e90jhh4l7t1wzl1lzcp7r3h7x84rtpkezmrppx1fw51mf68hm7xi9izxpg7msyd3zhyhq885o6tlyevwvq0zsaofmb4fp7qep3ceaa2idior8754fnn7fe64j8o6zg0n2mzdrrkhl0rrh9fymf0m6f4a9i2pn8vx6uojsq0utggeydyfazo5pm0r8na7p81hzti38mtj3c8ad66ezjecm3x73n51mnkltlbwmizl8lx3w1pkgy6luluqlvzyedzvj67q1enr0ef8rpb218wdvjqwd82qq8vj838hjbz24fzvt5apbmg79r813h1z3ti5uj864f5kvfve0qvkrab7n6yqnwt4eu6rvjaggn2fef5ckzytjb05dfq83jzbdigkkamohsawfpoaei6ehxzh54ntunvzuxwpco6c60wj95h8b0j95p71hzg8yxc0r5eozldl25nz16lxarm2xg3q0grb0dijaxplp7m6dbqobq9uhfm3f4j73bdx02pjgt9lpux2zt1yqwp6c7hjoh8q8u9ormovrg6gi2z3gvkq11r0h54g729x3yxds1zgl4zwol5bhf7xodq0zla97ph6homvddjcppimw2fybngc22qtcd7kv4g36vsk83kkvursjgufe7rvzgy9w3heubvrjgjc4d5kv1r3op9qvzvek4vig7fiuksbhshbu7oy4s5x8eg7tpgrnfjxi7m1tswq88d02vg6runcmgnch13r5kdntd0ifqy77ug9ysxc2ehrwgiga7ozdpi8ppzw6tug1whgkyuqf41w53mbs7e4xijztymfcx5hrznnwmj35dpsyr6s4ix7y0ykdf618plk51i5r6ee624y1lqtyqa7w0h5epq8m4du7g8ur9kqtxdbymy91841os3vorc0ywy1292oguiunqki0csk7vhm84s9o7gt184vusn7hfxvma4kztjijt5xgb2g22narsnf7jf4ebg2d5qx7rcpc8u6zxnylrwmvj448pp5rq9kwj20759lb6glrbkwgos1b67q1umpbj1urehs2jrkjhyznaambwps2ixsbhagk3xm3o6dlbo24uule0onn3fo3bay4mo57ia2tshczvbs0ad061n8j7k101mz7me3h50wzti6obr9hwkf38ntwnr39zarr35t5g7nrpqwzk5mqmupw9iwqmozmj9h9prbh6gxhv9h53bq17mpnzh9dcogj75ayyc6fcylveb8oicmiumt0zwkmhvhnq7ymovveg3knkzt4x4d0z2e790xrxoun1l7n7knko5rlnj83fiv227xecs0r1an6mp9uu4p9a87yxvm17cjh7nnhqtz64ma1rb1052u8i77mwr6brm5vcrgrkj3ygnj9syk8y2ojq847dstjiok1u20l5vbc7colnrk6d02871fvvn76humq0nv05s9fxtvsjpz18sla0lk6q839jevjmbfy8mgflrmcm1dt6ek3nriqggti6z28z8ql0e0hk49ndmeopbl72lj96cqclrlv9vs0fl1p1fhdi03zm6iw77cij8m774b4ie9wfmgzw1onxr7uukjzeoxhf5naorpkepmc5lk4xizexweagapfjhn4czy7rr6qb5e72xfp0hntarn4yzwknokbirxsa0dkxvbdqu8ye88jqwk20rmw4qztejhzvi3tfq9h22a8evzkg51i1s4e12laubekwhh89c5l0xf54mclh8a44357y620ntq6e9o9eordp028q57dgr0nn0rflce0mgl85j3moxhbphy9htgl596ode5xt99dbj72e01b2r9ca7lakgg6pyf2fsyo3vqamp0vfwwx9twzj3ywhbuzsrhld767echqfiiga9t8h5cg0hnwpp20mvdgdwao3bl2nnqlpez485ri19xoc3jf6rrbxjl0znqmxexzq005bwlzmqx009gxmd26c3hlf3pfnlunpzx5wyk0rspk7iyx6kgvgftrcljqq4rgbj6xmhmorcrypgkbw1p2a51o8i4cd48309t1ov07xp530ipskwutpwdp333aeqfuwe4vrogfbjk0pziuxmefh8twx34jg0v1kq1kgbi8xlpvmgkeq27h49dkzkyb79a9f8dx2r0x4k34udinxnf2eva2tz3fxm07nmj3cpkcym9numoahlbyz0nr1hcon2fm86jswn81nwplibkco69qzolmla4uddmx7absj408bdyvfwkj6yatf7glvnqggrq4e58m11awgmib23ha6pg0zf0wlakx2m5ce9r930ulirhvec95uf2rnk291903ek5idmxoanjoq0xc2sknchwcv8uza72tkecceqvvq2xcs4cvkb6mtmjvaz2fvsf6ud972f22alwi4smvvou0s61kf36m6tlrd6jk44mlzkj4421c8hpxn9yvre8jlhdpcmpd5mtjnludtecurxclm7bw06ve1jigwr1m9qu9a7vp3illbgar8tw6vgivfdl0u
d3yzyhss2zyd2og0uxi7rk8aaruf6r0gpimv9z87szp3jkpshlghmwyui94k7mxmso6xzelafumbyzi7hj7yigifax6nfoed98trs3r2rahymn043jjj4oyfj8incjebahjxgdr4xt2ijqtw0gd8618yvadhvrgi9szjd91x3oorzesq2ib7yevjd37745lwa01lm50w6dfwxnaqrjh653ac0rmc3jqlfp5c4x9b1cc7licvn017p99y7hxlxtlk9ojwrpdrmyrywfgzpe6rrmyo6jkeob7au4ocq0cxj64495yg8v21ko0wejkjlmb94bost9h62q3z3ondbbror21ln6u3rj25p724mszfnri3fqq934v5rp1u8psicvux3dwhytc0mb4g13xvqw92dvcwatn9r51wj7gu3meqdu6xg2pfrgetgbzd1255ak2t0s0li2dn1oz2hudgxnumge2ifk48akz2niezwpkofftyld67vskqhiaip915luvawa0h9v24cr7mbrr8tzzmwqeh4fy7g6h23u 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:44:08.377 12:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:44:08.377 { 00:44:08.377 "subsystems": [ 00:44:08.377 { 00:44:08.377 "subsystem": "bdev", 00:44:08.377 "config": [ 00:44:08.377 { 00:44:08.377 "params": { 00:44:08.377 "trtype": "pcie", 00:44:08.377 "traddr": "0000:00:10.0", 00:44:08.377 "name": "Nvme0" 00:44:08.377 }, 00:44:08.377 "method": "bdev_nvme_attach_controller" 00:44:08.377 }, 00:44:08.377 { 00:44:08.377 "method": "bdev_wait_for_examine" 00:44:08.377 } 00:44:08.377 ] 00:44:08.377 } 00:44:08.377 ] 00:44:08.377 } 00:44:08.636 [2024-06-10 12:07:40.454166] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:08.636 [2024-06-10 12:07:40.454525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167940 ] 00:44:08.636 [2024-06-10 12:07:40.648427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.894 [2024-06-10 12:07:40.902579] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:10.835  Copying: 4096/4096 [B] (average 4000 kBps) 00:44:10.835 00:44:10.835 12:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:44:10.835 12:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:44:10.835 12:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:44:10.835 12:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:44:10.835 { 00:44:10.835 "subsystems": [ 00:44:10.835 { 00:44:10.835 "subsystem": "bdev", 00:44:10.835 "config": [ 00:44:10.835 { 00:44:10.835 "params": { 00:44:10.835 "trtype": "pcie", 00:44:10.835 "traddr": "0000:00:10.0", 00:44:10.835 "name": "Nvme0" 00:44:10.835 }, 00:44:10.835 "method": "bdev_nvme_attach_controller" 00:44:10.835 }, 00:44:10.835 { 00:44:10.835 "method": "bdev_wait_for_examine" 00:44:10.835 } 00:44:10.835 ] 00:44:10.835 } 00:44:10.835 ] 00:44:10.835 } 00:44:10.835 [2024-06-10 12:07:42.770902] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
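The dd_rw_offset test running here writes a single 4096-byte random pattern (the long string assigned to data above) one I/O unit into the bdev and reads it back from the same offset. A hedged sketch condensed from the basic_rw.sh@59 through @72 lines in this trace: gen_conf is the suite's config generator, the input redirection on the read is inferred (xtrace does not display redirections), and the quoted string comparison below stands in for the escaped bash pattern match visible further down:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# Write dd.dump0 (holding the 4 KiB pattern) to the bdev, skipping one unit of output.
"$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)
# Read one unit back from the same offset into dd.dump1.
"$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --skip=1 --count=1 --json <(gen_conf)
# Compare the first 4096 bytes read back against the generated pattern.
read -rn4096 data_check < "$dump1"
[[ $data_check == "$data" ]]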
00:44:10.835 [2024-06-10 12:07:42.771456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167973 ] 00:44:11.093 [2024-06-10 12:07:42.934863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:11.351 [2024-06-10 12:07:43.190519] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:12.983  Copying: 4096/4096 [B] (average 4000 kBps) 00:44:12.983 00:44:12.983 12:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:44:12.984 12:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 41e0zwmoan3amqztafzvd2wioz2bnf7mq4irrih1embeulfg32luta9kiwc4klptqf66wom4tvhrjsesib0wun21ouwokvvmsjnml8khrywcllkvgog3z13w23nbw0qs5rsec7jgx803m8yaz0aj6dof74k6zgvgsk9hfjv6g7nd896eqim3fmg5i882voma3gw15zx1uvh0s3sus8zljowxe7y397u6ek6tqxlyuoa95wz40xzxgpcif1fzib0bfbdv1xixqssa3v6wcvgc3e4ensh4vowuofuh4drwbfmvtylvyc1rjdagxwkmh35wvbqeq8q35us522mmeyn95zueiq2bplr660kgbbvb8dvzulm4ox773229z46x2sv7gj52qhqagzspdw6d62m6nav6qabi274ia15vsemvsg0dxr4t9j0kp4v8q2ol1vwy8zkqyaeo68y9evud41sq5kophdw5hcqlfz6xpx1c97g5bobtie2dacsbpkywz4ekz9x19sluzlzhnbjrwessa67digzs92elsdc1k2ztzlrkfb9u0yi3mbdm2wxmfwjpbbjmk47purqasov8xjs532u8r7t0fq8wl6vp28prm63hspt9ogoczk7kpuybmzgd9speas3q94z2ebz2yz1gjvawidn2sxkts5kgfh37btrlo9cjunzqzkc0p0ovlwnpc816ccw8cnt6pprpk25ggmio5yvkhtiu62cka45p7mrexqt9d2v03cb5giqbufhckb45257b0zl5a5m6ydmnsoohiklixx4wegd56y661z6jdixcuiy3vndylk96xvpdco13jlftxpyg2pbg27fxhg6ypo99ptefesh9iyrifgnjwawj6qoi70zt7quj72y1gopioapzmxih34v9jhzrbjhayti5rn80hv86i1cmhdc1fctb3hrhn3b41cgcszxdqh4iqsafhir4utiq58bk4vapeu81sffr92j1epaz6w9gcdhpdgk6jd68uwhpjdp2e90jhh4l7t1wzl1lzcp7r3h7x84rtpkezmrppx1fw51mf68hm7xi9izxpg7msyd3zhyhq885o6tlyevwvq0zsaofmb4fp7qep3ceaa2idior8754fnn7fe64j8o6zg0n2mzdrrkhl0rrh9fymf0m6f4a9i2pn8vx6uojsq0utggeydyfazo5pm0r8na7p81hzti38mtj3c8ad66ezjecm3x73n51mnkltlbwmizl8lx3w1pkgy6luluqlvzyedzvj67q1enr0ef8rpb218wdvjqwd82qq8vj838hjbz24fzvt5apbmg79r813h1z3ti5uj864f5kvfve0qvkrab7n6yqnwt4eu6rvjaggn2fef5ckzytjb05dfq83jzbdigkkamohsawfpoaei6ehxzh54ntunvzuxwpco6c60wj95h8b0j95p71hzg8yxc0r5eozldl25nz16lxarm2xg3q0grb0dijaxplp7m6dbqobq9uhfm3f4j73bdx02pjgt9lpux2zt1yqwp6c7hjoh8q8u9ormovrg6gi2z3gvkq11r0h54g729x3yxds1zgl4zwol5bhf7xodq0zla97ph6homvddjcppimw2fybngc22qtcd7kv4g36vsk83kkvursjgufe7rvzgy9w3heubvrjgjc4d5kv1r3op9qvzvek4vig7fiuksbhshbu7oy4s5x8eg7tpgrnfjxi7m1tswq88d02vg6runcmgnch13r5kdntd0ifqy77ug9ysxc2ehrwgiga7ozdpi8ppzw6tug1whgkyuqf41w53mbs7e4xijztymfcx5hrznnwmj35dpsyr6s4ix7y0ykdf618plk51i5r6ee624y1lqtyqa7w0h5epq8m4du7g8ur9kqtxdbymy91841os3vorc0ywy1292oguiunqki0csk7vhm84s9o7gt184vusn7hfxvma4kztjijt5xgb2g22narsnf7jf4ebg2d5qx7rcpc8u6zxnylrwmvj448pp5rq9kwj20759lb6glrbkwgos1b67q1umpbj1urehs2jrkjhyznaambwps2ixsbhagk3xm3o6dlbo24uule0onn3fo3bay4mo57ia2tshczvbs0ad061n8j7k101mz7me3h50wzti6obr9hwkf38ntwnr39zarr35t5g7nrpqwzk5mqmupw9iwqmozmj9h9prbh6gxhv9h53bq17mpnzh9dcogj75ayyc6fcylveb8oicmiumt0zwkmhvhnq7ymovveg3knkzt4x4d0z2e790xrxoun1l7n7knko5rlnj83fiv227xecs0r1an6mp9uu4p9a87yxvm17cjh7nnhqtz64ma1rb1052u8i77mwr6brm5vcrgrkj3ygnj9syk8y2ojq847dstjiok1u20l5vbc7colnrk6d02871fvvn76humq0nv05s9fxtvsjpz18sla0lk6q839jevjmbfy8mgflrmcm1dt6ek3nriqggti6z28z8ql0e0hk49ndmeopbl72lj96cqclrlv9vs0fl1p1fhdi03zm6iw77cij8m774b4ie9wfmgzw1onxr7uukjzeoxhf5naorpkepmc5lk4xizexweagapfjhn4czy7rr6qb5e72xfp0hntarn4yzwknokbirxsa0dkxvbdqu8ye88jqwk20rmw4qztejhzvi3tfq9h22a8evzkg51i1s4e12laubekwhh89c5l0xf54mclh8a44357y620ntq6e9o
9eordp028q57dgr0nn0rflce0mgl85j3moxhbphy9htgl596ode5xt99dbj72e01b2r9ca7lakgg6pyf2fsyo3vqamp0vfwwx9twzj3ywhbuzsrhld767echqfiiga9t8h5cg0hnwpp20mvdgdwao3bl2nnqlpez485ri19xoc3jf6rrbxjl0znqmxexzq005bwlzmqx009gxmd26c3hlf3pfnlunpzx5wyk0rspk7iyx6kgvgftrcljqq4rgbj6xmhmorcrypgkbw1p2a51o8i4cd48309t1ov07xp530ipskwutpwdp333aeqfuwe4vrogfbjk0pziuxmefh8twx34jg0v1kq1kgbi8xlpvmgkeq27h49dkzkyb79a9f8dx2r0x4k34udinxnf2eva2tz3fxm07nmj3cpkcym9numoahlbyz0nr1hcon2fm86jswn81nwplibkco69qzolmla4uddmx7absj408bdyvfwkj6yatf7glvnqggrq4e58m11awgmib23ha6pg0zf0wlakx2m5ce9r930ulirhvec95uf2rnk291903ek5idmxoanjoq0xc2sknchwcv8uza72tkecceqvvq2xcs4cvkb6mtmjvaz2fvsf6ud972f22alwi4smvvou0s61kf36m6tlrd6jk44mlzkj4421c8hpxn9yvre8jlhdpcmpd5mtjnludtecurxclm7bw06ve1jigwr1m9qu9a7vp3illbgar8tw6vgivfdl0ud3yzyhss2zyd2og0uxi7rk8aaruf6r0gpimv9z87szp3jkpshlghmwyui94k7mxmso6xzelafumbyzi7hj7yigifax6nfoed98trs3r2rahymn043jjj4oyfj8incjebahjxgdr4xt2ijqtw0gd8618yvadhvrgi9szjd91x3oorzesq2ib7yevjd37745lwa01lm50w6dfwxnaqrjh653ac0rmc3jqlfp5c4x9b1cc7licvn017p99y7hxlxtlk9ojwrpdrmyrywfgzpe6rrmyo6jkeob7au4ocq0cxj64495yg8v21ko0wejkjlmb94bost9h62q3z3ondbbror21ln6u3rj25p724mszfnri3fqq934v5rp1u8psicvux3dwhytc0mb4g13xvqw92dvcwatn9r51wj7gu3meqdu6xg2pfrgetgbzd1255ak2t0s0li2dn1oz2hudgxnumge2ifk48akz2niezwpkofftyld67vskqhiaip915luvawa0h9v24cr7mbrr8tzzmwqeh4fy7g6h23u == \4\1\e\0\z\w\m\o\a\n\3\a\m\q\z\t\a\f\z\v\d\2\w\i\o\z\2\b\n\f\7\m\q\4\i\r\r\i\h\1\e\m\b\e\u\l\f\g\3\2\l\u\t\a\9\k\i\w\c\4\k\l\p\t\q\f\6\6\w\o\m\4\t\v\h\r\j\s\e\s\i\b\0\w\u\n\2\1\o\u\w\o\k\v\v\m\s\j\n\m\l\8\k\h\r\y\w\c\l\l\k\v\g\o\g\3\z\1\3\w\2\3\n\b\w\0\q\s\5\r\s\e\c\7\j\g\x\8\0\3\m\8\y\a\z\0\a\j\6\d\o\f\7\4\k\6\z\g\v\g\s\k\9\h\f\j\v\6\g\7\n\d\8\9\6\e\q\i\m\3\f\m\g\5\i\8\8\2\v\o\m\a\3\g\w\1\5\z\x\1\u\v\h\0\s\3\s\u\s\8\z\l\j\o\w\x\e\7\y\3\9\7\u\6\e\k\6\t\q\x\l\y\u\o\a\9\5\w\z\4\0\x\z\x\g\p\c\i\f\1\f\z\i\b\0\b\f\b\d\v\1\x\i\x\q\s\s\a\3\v\6\w\c\v\g\c\3\e\4\e\n\s\h\4\v\o\w\u\o\f\u\h\4\d\r\w\b\f\m\v\t\y\l\v\y\c\1\r\j\d\a\g\x\w\k\m\h\3\5\w\v\b\q\e\q\8\q\3\5\u\s\5\2\2\m\m\e\y\n\9\5\z\u\e\i\q\2\b\p\l\r\6\6\0\k\g\b\b\v\b\8\d\v\z\u\l\m\4\o\x\7\7\3\2\2\9\z\4\6\x\2\s\v\7\g\j\5\2\q\h\q\a\g\z\s\p\d\w\6\d\6\2\m\6\n\a\v\6\q\a\b\i\2\7\4\i\a\1\5\v\s\e\m\v\s\g\0\d\x\r\4\t\9\j\0\k\p\4\v\8\q\2\o\l\1\v\w\y\8\z\k\q\y\a\e\o\6\8\y\9\e\v\u\d\4\1\s\q\5\k\o\p\h\d\w\5\h\c\q\l\f\z\6\x\p\x\1\c\9\7\g\5\b\o\b\t\i\e\2\d\a\c\s\b\p\k\y\w\z\4\e\k\z\9\x\1\9\s\l\u\z\l\z\h\n\b\j\r\w\e\s\s\a\6\7\d\i\g\z\s\9\2\e\l\s\d\c\1\k\2\z\t\z\l\r\k\f\b\9\u\0\y\i\3\m\b\d\m\2\w\x\m\f\w\j\p\b\b\j\m\k\4\7\p\u\r\q\a\s\o\v\8\x\j\s\5\3\2\u\8\r\7\t\0\f\q\8\w\l\6\v\p\2\8\p\r\m\6\3\h\s\p\t\9\o\g\o\c\z\k\7\k\p\u\y\b\m\z\g\d\9\s\p\e\a\s\3\q\9\4\z\2\e\b\z\2\y\z\1\g\j\v\a\w\i\d\n\2\s\x\k\t\s\5\k\g\f\h\3\7\b\t\r\l\o\9\c\j\u\n\z\q\z\k\c\0\p\0\o\v\l\w\n\p\c\8\1\6\c\c\w\8\c\n\t\6\p\p\r\p\k\2\5\g\g\m\i\o\5\y\v\k\h\t\i\u\6\2\c\k\a\4\5\p\7\m\r\e\x\q\t\9\d\2\v\0\3\c\b\5\g\i\q\b\u\f\h\c\k\b\4\5\2\5\7\b\0\z\l\5\a\5\m\6\y\d\m\n\s\o\o\h\i\k\l\i\x\x\4\w\e\g\d\5\6\y\6\6\1\z\6\j\d\i\x\c\u\i\y\3\v\n\d\y\l\k\9\6\x\v\p\d\c\o\1\3\j\l\f\t\x\p\y\g\2\p\b\g\2\7\f\x\h\g\6\y\p\o\9\9\p\t\e\f\e\s\h\9\i\y\r\i\f\g\n\j\w\a\w\j\6\q\o\i\7\0\z\t\7\q\u\j\7\2\y\1\g\o\p\i\o\a\p\z\m\x\i\h\3\4\v\9\j\h\z\r\b\j\h\a\y\t\i\5\r\n\8\0\h\v\8\6\i\1\c\m\h\d\c\1\f\c\t\b\3\h\r\h\n\3\b\4\1\c\g\c\s\z\x\d\q\h\4\i\q\s\a\f\h\i\r\4\u\t\i\q\5\8\b\k\4\v\a\p\e\u\8\1\s\f\f\r\9\2\j\1\e\p\a\z\6\w\9\g\c\d\h\p\d\g\k\6\j\d\6\8\u\w\h\p\j\d\p\2\e\9\0\j\h\h\4\l\7\t\1\w\z\l\1\l\z\c\p\7\r\3\h\7\x\8\4\r\t\p\k\e\z\m\r\p\p\x\1\f\w\5\1\m\f\6\8\h\m\7\x\i\9\i\z\x\p\g\7\m\s\y\d\3\z\h\y\h\q\8\8\5\o\6\t\l\y\e\v\w\v\q\0\z\s\a\o\f\m\b\4\f\p\7\q\e\p\
3\c\e\a\a\2\i\d\i\o\r\8\7\5\4\f\n\n\7\f\e\6\4\j\8\o\6\z\g\0\n\2\m\z\d\r\r\k\h\l\0\r\r\h\9\f\y\m\f\0\m\6\f\4\a\9\i\2\p\n\8\v\x\6\u\o\j\s\q\0\u\t\g\g\e\y\d\y\f\a\z\o\5\p\m\0\r\8\n\a\7\p\8\1\h\z\t\i\3\8\m\t\j\3\c\8\a\d\6\6\e\z\j\e\c\m\3\x\7\3\n\5\1\m\n\k\l\t\l\b\w\m\i\z\l\8\l\x\3\w\1\p\k\g\y\6\l\u\l\u\q\l\v\z\y\e\d\z\v\j\6\7\q\1\e\n\r\0\e\f\8\r\p\b\2\1\8\w\d\v\j\q\w\d\8\2\q\q\8\v\j\8\3\8\h\j\b\z\2\4\f\z\v\t\5\a\p\b\m\g\7\9\r\8\1\3\h\1\z\3\t\i\5\u\j\8\6\4\f\5\k\v\f\v\e\0\q\v\k\r\a\b\7\n\6\y\q\n\w\t\4\e\u\6\r\v\j\a\g\g\n\2\f\e\f\5\c\k\z\y\t\j\b\0\5\d\f\q\8\3\j\z\b\d\i\g\k\k\a\m\o\h\s\a\w\f\p\o\a\e\i\6\e\h\x\z\h\5\4\n\t\u\n\v\z\u\x\w\p\c\o\6\c\6\0\w\j\9\5\h\8\b\0\j\9\5\p\7\1\h\z\g\8\y\x\c\0\r\5\e\o\z\l\d\l\2\5\n\z\1\6\l\x\a\r\m\2\x\g\3\q\0\g\r\b\0\d\i\j\a\x\p\l\p\7\m\6\d\b\q\o\b\q\9\u\h\f\m\3\f\4\j\7\3\b\d\x\0\2\p\j\g\t\9\l\p\u\x\2\z\t\1\y\q\w\p\6\c\7\h\j\o\h\8\q\8\u\9\o\r\m\o\v\r\g\6\g\i\2\z\3\g\v\k\q\1\1\r\0\h\5\4\g\7\2\9\x\3\y\x\d\s\1\z\g\l\4\z\w\o\l\5\b\h\f\7\x\o\d\q\0\z\l\a\9\7\p\h\6\h\o\m\v\d\d\j\c\p\p\i\m\w\2\f\y\b\n\g\c\2\2\q\t\c\d\7\k\v\4\g\3\6\v\s\k\8\3\k\k\v\u\r\s\j\g\u\f\e\7\r\v\z\g\y\9\w\3\h\e\u\b\v\r\j\g\j\c\4\d\5\k\v\1\r\3\o\p\9\q\v\z\v\e\k\4\v\i\g\7\f\i\u\k\s\b\h\s\h\b\u\7\o\y\4\s\5\x\8\e\g\7\t\p\g\r\n\f\j\x\i\7\m\1\t\s\w\q\8\8\d\0\2\v\g\6\r\u\n\c\m\g\n\c\h\1\3\r\5\k\d\n\t\d\0\i\f\q\y\7\7\u\g\9\y\s\x\c\2\e\h\r\w\g\i\g\a\7\o\z\d\p\i\8\p\p\z\w\6\t\u\g\1\w\h\g\k\y\u\q\f\4\1\w\5\3\m\b\s\7\e\4\x\i\j\z\t\y\m\f\c\x\5\h\r\z\n\n\w\m\j\3\5\d\p\s\y\r\6\s\4\i\x\7\y\0\y\k\d\f\6\1\8\p\l\k\5\1\i\5\r\6\e\e\6\2\4\y\1\l\q\t\y\q\a\7\w\0\h\5\e\p\q\8\m\4\d\u\7\g\8\u\r\9\k\q\t\x\d\b\y\m\y\9\1\8\4\1\o\s\3\v\o\r\c\0\y\w\y\1\2\9\2\o\g\u\i\u\n\q\k\i\0\c\s\k\7\v\h\m\8\4\s\9\o\7\g\t\1\8\4\v\u\s\n\7\h\f\x\v\m\a\4\k\z\t\j\i\j\t\5\x\g\b\2\g\2\2\n\a\r\s\n\f\7\j\f\4\e\b\g\2\d\5\q\x\7\r\c\p\c\8\u\6\z\x\n\y\l\r\w\m\v\j\4\4\8\p\p\5\r\q\9\k\w\j\2\0\7\5\9\l\b\6\g\l\r\b\k\w\g\o\s\1\b\6\7\q\1\u\m\p\b\j\1\u\r\e\h\s\2\j\r\k\j\h\y\z\n\a\a\m\b\w\p\s\2\i\x\s\b\h\a\g\k\3\x\m\3\o\6\d\l\b\o\2\4\u\u\l\e\0\o\n\n\3\f\o\3\b\a\y\4\m\o\5\7\i\a\2\t\s\h\c\z\v\b\s\0\a\d\0\6\1\n\8\j\7\k\1\0\1\m\z\7\m\e\3\h\5\0\w\z\t\i\6\o\b\r\9\h\w\k\f\3\8\n\t\w\n\r\3\9\z\a\r\r\3\5\t\5\g\7\n\r\p\q\w\z\k\5\m\q\m\u\p\w\9\i\w\q\m\o\z\m\j\9\h\9\p\r\b\h\6\g\x\h\v\9\h\5\3\b\q\1\7\m\p\n\z\h\9\d\c\o\g\j\7\5\a\y\y\c\6\f\c\y\l\v\e\b\8\o\i\c\m\i\u\m\t\0\z\w\k\m\h\v\h\n\q\7\y\m\o\v\v\e\g\3\k\n\k\z\t\4\x\4\d\0\z\2\e\7\9\0\x\r\x\o\u\n\1\l\7\n\7\k\n\k\o\5\r\l\n\j\8\3\f\i\v\2\2\7\x\e\c\s\0\r\1\a\n\6\m\p\9\u\u\4\p\9\a\8\7\y\x\v\m\1\7\c\j\h\7\n\n\h\q\t\z\6\4\m\a\1\r\b\1\0\5\2\u\8\i\7\7\m\w\r\6\b\r\m\5\v\c\r\g\r\k\j\3\y\g\n\j\9\s\y\k\8\y\2\o\j\q\8\4\7\d\s\t\j\i\o\k\1\u\2\0\l\5\v\b\c\7\c\o\l\n\r\k\6\d\0\2\8\7\1\f\v\v\n\7\6\h\u\m\q\0\n\v\0\5\s\9\f\x\t\v\s\j\p\z\1\8\s\l\a\0\l\k\6\q\8\3\9\j\e\v\j\m\b\f\y\8\m\g\f\l\r\m\c\m\1\d\t\6\e\k\3\n\r\i\q\g\g\t\i\6\z\2\8\z\8\q\l\0\e\0\h\k\4\9\n\d\m\e\o\p\b\l\7\2\l\j\9\6\c\q\c\l\r\l\v\9\v\s\0\f\l\1\p\1\f\h\d\i\0\3\z\m\6\i\w\7\7\c\i\j\8\m\7\7\4\b\4\i\e\9\w\f\m\g\z\w\1\o\n\x\r\7\u\u\k\j\z\e\o\x\h\f\5\n\a\o\r\p\k\e\p\m\c\5\l\k\4\x\i\z\e\x\w\e\a\g\a\p\f\j\h\n\4\c\z\y\7\r\r\6\q\b\5\e\7\2\x\f\p\0\h\n\t\a\r\n\4\y\z\w\k\n\o\k\b\i\r\x\s\a\0\d\k\x\v\b\d\q\u\8\y\e\8\8\j\q\w\k\2\0\r\m\w\4\q\z\t\e\j\h\z\v\i\3\t\f\q\9\h\2\2\a\8\e\v\z\k\g\5\1\i\1\s\4\e\1\2\l\a\u\b\e\k\w\h\h\8\9\c\5\l\0\x\f\5\4\m\c\l\h\8\a\4\4\3\5\7\y\6\2\0\n\t\q\6\e\9\o\9\e\o\r\d\p\0\2\8\q\5\7\d\g\r\0\n\n\0\r\f\l\c\e\0\m\g\l\8\5\j\3\m\o\x\h\b\p\h\y\9\h\t\g\l\5\9\6\o\d\e\5\x\t\9\9\d\b\j\7\2\e\0\1\b\2\r\9\c\a\7\l\a\k\g\g\6\p\y\f\2\f\s\y\o\3\v\q\a\m\p\0\v\f\w\w\x\9\t\w\z\j\3\y\w\h\b\u\z\s\r
\h\l\d\7\6\7\e\c\h\q\f\i\i\g\a\9\t\8\h\5\c\g\0\h\n\w\p\p\2\0\m\v\d\g\d\w\a\o\3\b\l\2\n\n\q\l\p\e\z\4\8\5\r\i\1\9\x\o\c\3\j\f\6\r\r\b\x\j\l\0\z\n\q\m\x\e\x\z\q\0\0\5\b\w\l\z\m\q\x\0\0\9\g\x\m\d\2\6\c\3\h\l\f\3\p\f\n\l\u\n\p\z\x\5\w\y\k\0\r\s\p\k\7\i\y\x\6\k\g\v\g\f\t\r\c\l\j\q\q\4\r\g\b\j\6\x\m\h\m\o\r\c\r\y\p\g\k\b\w\1\p\2\a\5\1\o\8\i\4\c\d\4\8\3\0\9\t\1\o\v\0\7\x\p\5\3\0\i\p\s\k\w\u\t\p\w\d\p\3\3\3\a\e\q\f\u\w\e\4\v\r\o\g\f\b\j\k\0\p\z\i\u\x\m\e\f\h\8\t\w\x\3\4\j\g\0\v\1\k\q\1\k\g\b\i\8\x\l\p\v\m\g\k\e\q\2\7\h\4\9\d\k\z\k\y\b\7\9\a\9\f\8\d\x\2\r\0\x\4\k\3\4\u\d\i\n\x\n\f\2\e\v\a\2\t\z\3\f\x\m\0\7\n\m\j\3\c\p\k\c\y\m\9\n\u\m\o\a\h\l\b\y\z\0\n\r\1\h\c\o\n\2\f\m\8\6\j\s\w\n\8\1\n\w\p\l\i\b\k\c\o\6\9\q\z\o\l\m\l\a\4\u\d\d\m\x\7\a\b\s\j\4\0\8\b\d\y\v\f\w\k\j\6\y\a\t\f\7\g\l\v\n\q\g\g\r\q\4\e\5\8\m\1\1\a\w\g\m\i\b\2\3\h\a\6\p\g\0\z\f\0\w\l\a\k\x\2\m\5\c\e\9\r\9\3\0\u\l\i\r\h\v\e\c\9\5\u\f\2\r\n\k\2\9\1\9\0\3\e\k\5\i\d\m\x\o\a\n\j\o\q\0\x\c\2\s\k\n\c\h\w\c\v\8\u\z\a\7\2\t\k\e\c\c\e\q\v\v\q\2\x\c\s\4\c\v\k\b\6\m\t\m\j\v\a\z\2\f\v\s\f\6\u\d\9\7\2\f\2\2\a\l\w\i\4\s\m\v\v\o\u\0\s\6\1\k\f\3\6\m\6\t\l\r\d\6\j\k\4\4\m\l\z\k\j\4\4\2\1\c\8\h\p\x\n\9\y\v\r\e\8\j\l\h\d\p\c\m\p\d\5\m\t\j\n\l\u\d\t\e\c\u\r\x\c\l\m\7\b\w\0\6\v\e\1\j\i\g\w\r\1\m\9\q\u\9\a\7\v\p\3\i\l\l\b\g\a\r\8\t\w\6\v\g\i\v\f\d\l\0\u\d\3\y\z\y\h\s\s\2\z\y\d\2\o\g\0\u\x\i\7\r\k\8\a\a\r\u\f\6\r\0\g\p\i\m\v\9\z\8\7\s\z\p\3\j\k\p\s\h\l\g\h\m\w\y\u\i\9\4\k\7\m\x\m\s\o\6\x\z\e\l\a\f\u\m\b\y\z\i\7\h\j\7\y\i\g\i\f\a\x\6\n\f\o\e\d\9\8\t\r\s\3\r\2\r\a\h\y\m\n\0\4\3\j\j\j\4\o\y\f\j\8\i\n\c\j\e\b\a\h\j\x\g\d\r\4\x\t\2\i\j\q\t\w\0\g\d\8\6\1\8\y\v\a\d\h\v\r\g\i\9\s\z\j\d\9\1\x\3\o\o\r\z\e\s\q\2\i\b\7\y\e\v\j\d\3\7\7\4\5\l\w\a\0\1\l\m\5\0\w\6\d\f\w\x\n\a\q\r\j\h\6\5\3\a\c\0\r\m\c\3\j\q\l\f\p\5\c\4\x\9\b\1\c\c\7\l\i\c\v\n\0\1\7\p\9\9\y\7\h\x\l\x\t\l\k\9\o\j\w\r\p\d\r\m\y\r\y\w\f\g\z\p\e\6\r\r\m\y\o\6\j\k\e\o\b\7\a\u\4\o\c\q\0\c\x\j\6\4\4\9\5\y\g\8\v\2\1\k\o\0\w\e\j\k\j\l\m\b\9\4\b\o\s\t\9\h\6\2\q\3\z\3\o\n\d\b\b\r\o\r\2\1\l\n\6\u\3\r\j\2\5\p\7\2\4\m\s\z\f\n\r\i\3\f\q\q\9\3\4\v\5\r\p\1\u\8\p\s\i\c\v\u\x\3\d\w\h\y\t\c\0\m\b\4\g\1\3\x\v\q\w\9\2\d\v\c\w\a\t\n\9\r\5\1\w\j\7\g\u\3\m\e\q\d\u\6\x\g\2\p\f\r\g\e\t\g\b\z\d\1\2\5\5\a\k\2\t\0\s\0\l\i\2\d\n\1\o\z\2\h\u\d\g\x\n\u\m\g\e\2\i\f\k\4\8\a\k\z\2\n\i\e\z\w\p\k\o\f\f\t\y\l\d\6\7\v\s\k\q\h\i\a\i\p\9\1\5\l\u\v\a\w\a\0\h\9\v\2\4\c\r\7\m\b\r\r\8\t\z\z\m\w\q\e\h\4\f\y\7\g\6\h\2\3\u ]] 00:44:12.984 00:44:12.984 real 0m4.658s 00:44:12.984 user 0m3.939s 00:44:12.984 sys 0m0.555s 00:44:12.984 12:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:12.984 12:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:44:12.984 ************************************ 00:44:12.984 END TEST dd_rw_offset 00:44:12.984 ************************************ 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:44:12.984 12:07:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:44:13.242 { 00:44:13.242 "subsystems": [ 00:44:13.242 { 00:44:13.242 "subsystem": "bdev", 00:44:13.242 "config": [ 00:44:13.242 { 00:44:13.242 "params": { 00:44:13.242 "trtype": "pcie", 00:44:13.242 "traddr": "0000:00:10.0", 00:44:13.242 "name": "Nvme0" 00:44:13.242 }, 00:44:13.242 "method": "bdev_nvme_attach_controller" 00:44:13.242 }, 00:44:13.242 { 00:44:13.242 "method": "bdev_wait_for_examine" 00:44:13.242 } 00:44:13.242 ] 00:44:13.242 } 00:44:13.242 ] 00:44:13.242 } 00:44:13.242 [2024-06-10 12:07:45.111260] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:13.242 [2024-06-10 12:07:45.111727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168016 ] 00:44:13.242 [2024-06-10 12:07:45.292845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:13.499 [2024-06-10 12:07:45.518047] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:15.436  Copying: 1024/1024 [kB] (average 1000 MBps) 00:44:15.436 00:44:15.436 12:07:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:15.436 ************************************ 00:44:15.436 END TEST spdk_dd_basic_rw 00:44:15.436 ************************************ 00:44:15.436 00:44:15.436 real 0m55.035s 00:44:15.436 user 0m46.797s 00:44:15.436 sys 0m6.561s 00:44:15.436 12:07:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:15.436 12:07:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:44:15.436 12:07:47 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:44:15.436 12:07:47 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:15.436 12:07:47 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:15.436 12:07:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:44:15.436 ************************************ 00:44:15.436 START TEST spdk_dd_posix 00:44:15.436 ************************************ 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:44:15.436 * Looking for test storage... 
00:44:15.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:44:15.436 * First test run, using AIO 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:15.436 ************************************ 00:44:15.436 START TEST dd_flag_append 00:44:15.436 ************************************ 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # append 00:44:15.436 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=iby0u9on5rebeniw7tqhlgq4oq4rkinj 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=2c69iyadme4do32scg6n1a2krvu44e30 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s iby0u9on5rebeniw7tqhlgq4oq4rkinj 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 2c69iyadme4do32scg6n1a2krvu44e30 00:44:15.437 12:07:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:44:15.437 [2024-06-10 12:07:47.407443] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
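The dd_flag_append test starting here seeds dd.dump0 and dd.dump1 with two 32-byte random strings and copies dump0 onto dump1 with --oflag=append, so the destination must end up holding its original bytes followed by the appended ones; the pattern match a few lines below checks exactly that concatenation. A condensed sketch of the flow, with the shell redirections and the final readback inferred since the xtrace does not show them:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

dump0=iby0u9on5rebeniw7tqhlgq4oq4rkinj     # 32 random bytes from gen_bytes 32, as in the trace
dump1=2c69iyadme4do32scg6n1a2krvu44e30
printf %s "$dump0" > "$file0"
printf %s "$dump1" > "$file1"

# Append file0's contents to file1 instead of overwriting it.
"$SPDK_DD" --if="$file0" --of="$file1" --oflag=append

# After the copy, file1 must contain dump1 immediately followed by dump0.
[[ $(cat "$file1") == "${dump1}${dump0}" ]]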
00:44:15.437 [2024-06-10 12:07:47.407878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168105 ] 00:44:15.694 [2024-06-10 12:07:47.594076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:15.952 [2024-06-10 12:07:47.822570] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:17.583  Copying: 32/32 [B] (average 31 kBps) 00:44:17.583 00:44:17.583 ************************************ 00:44:17.583 END TEST dd_flag_append 00:44:17.583 ************************************ 00:44:17.583 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 2c69iyadme4do32scg6n1a2krvu44e30iby0u9on5rebeniw7tqhlgq4oq4rkinj == \2\c\6\9\i\y\a\d\m\e\4\d\o\3\2\s\c\g\6\n\1\a\2\k\r\v\u\4\4\e\3\0\i\b\y\0\u\9\o\n\5\r\e\b\e\n\i\w\7\t\q\h\l\g\q\4\o\q\4\r\k\i\n\j ]] 00:44:17.583 00:44:17.583 real 0m2.285s 00:44:17.583 user 0m1.902s 00:44:17.583 sys 0m0.249s 00:44:17.583 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:17.583 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:17.904 ************************************ 00:44:17.904 START TEST dd_flag_directory 00:44:17.904 ************************************ 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # directory 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # local es=0 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:17.904 12:07:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:17.904 [2024-06-10 12:07:49.729932] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:17.904 [2024-06-10 12:07:49.730328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168159 ] 00:44:17.904 [2024-06-10 12:07:49.895503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:18.162 [2024-06-10 12:07:50.130108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:18.728 [2024-06-10 12:07:50.489838] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:18.728 [2024-06-10 12:07:50.490183] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:18.728 [2024-06-10 12:07:50.490256] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:19.294 [2024-06-10 12:07:51.319276] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # es=236 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # es=108 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # case "$es" in 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@669 -- # es=1 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # local es=0 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:19.859 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:19.860 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:19.860 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:19.860 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:19.860 12:07:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:19.860 [2024-06-10 12:07:51.873046] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:19.860 [2024-06-10 12:07:51.873516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168191 ] 00:44:20.118 [2024-06-10 12:07:52.056124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:20.376 [2024-06-10 12:07:52.277724] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:20.634 [2024-06-10 12:07:52.649505] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:20.634 [2024-06-10 12:07:52.649829] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:20.634 [2024-06-10 12:07:52.649911] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:21.567 [2024-06-10 12:07:53.557799] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # es=236 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:44:22.134 ************************************ 00:44:22.134 END TEST dd_flag_directory 00:44:22.134 ************************************ 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # es=108 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # case "$es" in 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@669 -- # es=1 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:44:22.134 00:44:22.134 real 0m4.375s 00:44:22.134 user 0m3.725s 00:44:22.134 sys 0m0.445s 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:22.134 ************************************ 00:44:22.134 START TEST dd_flag_nofollow 00:44:22.134 
************************************ 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # nofollow 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # local es=0 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:22.134 12:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:22.134 [2024-06-10 12:07:54.176543] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
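The nofollow case above first recreates dd.dump0.link and dd.dump1.link with ln -fs, then expects the copy to be refused with "Too many levels of symbolic links" whenever the link side carries --iflag=nofollow or --oflag=nofollow; only the final copy through the input link without the flag must succeed. A sketch under the same paths, with plain negation standing in for the suite's NOT error-status helper:

#!/usr/bin/env bash
set -u
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
f1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

printf %s 'payload' > "$f0"
ln -fs "$f0" "$f0.link"
ln -fs "$f1" "$f1.link"

# Both of these must fail: nofollow refuses to dereference the symlink.
! "$SPDK_DD" --if="$f0.link" --iflag=nofollow --of="$f1" || echo 'unexpected success (iflag)'
! "$SPDK_DD" --if="$f0" --of="$f1.link" --oflag=nofollow || echo 'unexpected success (oflag)'

# Without nofollow the link is simply followed and the copy goes through.
"$SPDK_DD" --if="$f0.link" --of="$f1" && echo 'plain copy through link OK'

rm -f "$f0.link" "$f1.link"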
00:44:22.134 [2024-06-10 12:07:54.176958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168237 ] 00:44:22.392 [2024-06-10 12:07:54.342026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:22.650 [2024-06-10 12:07:54.572248] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:22.908 [2024-06-10 12:07:54.939615] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:44:22.908 [2024-06-10 12:07:54.939973] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:44:22.908 [2024-06-10 12:07:54.940057] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:23.874 [2024-06-10 12:07:55.836362] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # es=216 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # es=88 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # case "$es" in 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@669 -- # es=1 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # local es=0 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:24.440 12:07:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:24.440 12:07:56 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:44:24.440 [2024-06-10 12:07:56.418233] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:24.440 [2024-06-10 12:07:56.418799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168272 ] 00:44:24.698 [2024-06-10 12:07:56.600371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:24.956 [2024-06-10 12:07:56.832415] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:25.214 [2024-06-10 12:07:57.210756] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:44:25.214 [2024-06-10 12:07:57.211101] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:44:25.214 [2024-06-10 12:07:57.211239] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:26.148 [2024-06-10 12:07:58.131137] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # es=216 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # es=88 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # case "$es" in 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@669 -- # es=1 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:44:26.713 12:07:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:26.713 [2024-06-10 12:07:58.685014] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:26.713 [2024-06-10 12:07:58.685949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168294 ] 00:44:26.971 [2024-06-10 12:07:58.850859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.228 [2024-06-10 12:07:59.093167] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:28.898  Copying: 512/512 [B] (average 500 kBps) 00:44:28.898 00:44:28.898 ************************************ 00:44:28.898 END TEST dd_flag_nofollow 00:44:28.898 ************************************ 00:44:28.898 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ h9kr8fiu6fd8u7tgn5ux3aq7pr2yz3e2fvg38p2pxpoy9tn5qf91pevbiscyo5x34pgpst8b47qe5gykjy8dbsixk0ajlgqvd537ohu1iulhevrptu6by4qv6bj0ql2vbq2ygzbxakjf8q5wj3vo7sukqujpjxvn3250hxnwzvaawbe35vhhc5i4chyafuaisk64z8nyua66nobu51focvqdxzdt1okpct48kyll27mw578qgeji4vdepa7lwcqokozqey0hogoytx1384ji4w84m7eju39ckaf8mqogw9e6sau6893vjumbv8jx2641k8m2dgpbl3wmvuh698h00z40zibi7pcteuilxsdnalmk1mqtbqetzzdvxptiemqn3z0yctrsal4uyj1xa59gnpr07qeakrin0kb70yxa4q8fkkayputbstelp509aymg767v6i5uh1gpn353lfgyv7vbx3igwwjeq6ubqfwm4ckesyzahlk4myf9zys220kj == \h\9\k\r\8\f\i\u\6\f\d\8\u\7\t\g\n\5\u\x\3\a\q\7\p\r\2\y\z\3\e\2\f\v\g\3\8\p\2\p\x\p\o\y\9\t\n\5\q\f\9\1\p\e\v\b\i\s\c\y\o\5\x\3\4\p\g\p\s\t\8\b\4\7\q\e\5\g\y\k\j\y\8\d\b\s\i\x\k\0\a\j\l\g\q\v\d\5\3\7\o\h\u\1\i\u\l\h\e\v\r\p\t\u\6\b\y\4\q\v\6\b\j\0\q\l\2\v\b\q\2\y\g\z\b\x\a\k\j\f\8\q\5\w\j\3\v\o\7\s\u\k\q\u\j\p\j\x\v\n\3\2\5\0\h\x\n\w\z\v\a\a\w\b\e\3\5\v\h\h\c\5\i\4\c\h\y\a\f\u\a\i\s\k\6\4\z\8\n\y\u\a\6\6\n\o\b\u\5\1\f\o\c\v\q\d\x\z\d\t\1\o\k\p\c\t\4\8\k\y\l\l\2\7\m\w\5\7\8\q\g\e\j\i\4\v\d\e\p\a\7\l\w\c\q\o\k\o\z\q\e\y\0\h\o\g\o\y\t\x\1\3\8\4\j\i\4\w\8\4\m\7\e\j\u\3\9\c\k\a\f\8\m\q\o\g\w\9\e\6\s\a\u\6\8\9\3\v\j\u\m\b\v\8\j\x\2\6\4\1\k\8\m\2\d\g\p\b\l\3\w\m\v\u\h\6\9\8\h\0\0\z\4\0\z\i\b\i\7\p\c\t\e\u\i\l\x\s\d\n\a\l\m\k\1\m\q\t\b\q\e\t\z\z\d\v\x\p\t\i\e\m\q\n\3\z\0\y\c\t\r\s\a\l\4\u\y\j\1\x\a\5\9\g\n\p\r\0\7\q\e\a\k\r\i\n\0\k\b\7\0\y\x\a\4\q\8\f\k\k\a\y\p\u\t\b\s\t\e\l\p\5\0\9\a\y\m\g\7\6\7\v\6\i\5\u\h\1\g\p\n\3\5\3\l\f\g\y\v\7\v\b\x\3\i\g\w\w\j\e\q\6\u\b\q\f\w\m\4\c\k\e\s\y\z\a\h\l\k\4\m\y\f\9\z\y\s\2\2\0\k\j ]] 00:44:28.898 00:44:28.898 real 0m6.823s 00:44:28.898 user 0m5.772s 00:44:28.898 sys 0m0.709s 00:44:28.898 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:28.898 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:29.157 ************************************ 00:44:29.157 START TEST dd_flag_noatime 00:44:29.157 ************************************ 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # noatime 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 
-- # gen_bytes 512 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1718021279 00:44:29.157 12:08:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:29.157 12:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1718021280 00:44:29.157 12:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:44:30.090 12:08:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:30.091 [2024-06-10 12:08:02.092392] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:30.091 [2024-06-10 12:08:02.092876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168369 ] 00:44:30.347 [2024-06-10 12:08:02.275165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:30.605 [2024-06-10 12:08:02.592108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:32.542  Copying: 512/512 [B] (average 500 kBps) 00:44:32.542 00:44:32.542 12:08:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:32.542 12:08:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1718021279 )) 00:44:32.542 12:08:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:32.542 12:08:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1718021280 )) 00:44:32.542 12:08:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:32.542 [2024-06-10 12:08:04.540703] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
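The noatime check running above is a pair of stat/copy rounds on the same files: record dd.dump0's access time with stat --printf=%X, sleep a second, copy it with --iflag=noatime and require the atime unchanged, then copy it again without the flag, after which the atime is allowed to move forward. A compact sketch of that sequence with the same binary and paths; the exact epoch values in the trace are specific to this run:

#!/usr/bin/env bash
set -eu
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
f1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

atime_before=$(stat --printf=%X "$f0")
sleep 1

# Reading the input with noatime must leave its access time alone.
"$SPDK_DD" --if="$f0" --iflag=noatime --of="$f1"
(( $(stat --printf=%X "$f0") == atime_before )) && echo 'atime preserved'

# A plain read afterwards may bump it (mount options permitting).
"$SPDK_DD" --if="$f0" --of="$f1"
(( atime_before <= $(stat --printf=%X "$f0") )) && echo 'atime free to advance'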
00:44:32.542 [2024-06-10 12:08:04.541345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168396 ] 00:44:32.800 [2024-06-10 12:08:04.731419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:33.132 [2024-06-10 12:08:04.966354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:34.767  Copying: 512/512 [B] (average 500 kBps) 00:44:34.767 00:44:34.767 12:08:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:34.767 ************************************ 00:44:34.767 END TEST dd_flag_noatime 00:44:34.767 ************************************ 00:44:34.767 12:08:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1718021285 )) 00:44:34.767 00:44:34.767 real 0m5.784s 00:44:34.767 user 0m3.976s 00:44:34.767 sys 0m0.525s 00:44:34.767 12:08:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:34.767 12:08:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:44:34.767 12:08:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:44:34.767 12:08:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:34.767 12:08:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:34.767 12:08:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:35.025 ************************************ 00:44:35.025 START TEST dd_flags_misc 00:44:35.025 ************************************ 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # io 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:35.025 12:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:44:35.025 [2024-06-10 12:08:06.921662] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
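dd_flags_misc, starting above, walks a small flag matrix: every read-side flag in flags_ro=(direct nonblock) is paired with every write-side flag in flags_rw=(direct nonblock sync dsync), and each combination must still produce a byte-identical 512-byte copy; the long [[ ... ]] comparisons below are that identity check spelled out on the generated string. A sketch of the same loop, with cmp standing in for the string comparison and a 512-byte input so the O_DIRECT alignment requirement is met:

#!/usr/bin/env bash
set -eu
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
f1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# 512 bytes of input, as gen_bytes 512 produces in the suite.
head -c 512 /dev/urandom > "$f0"

flags_ro=(direct nonblock)                 # valid on the read side
flags_rw=("${flags_ro[@]}" sync dsync)     # write side also takes sync/dsync

for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --if="$f0" --iflag="$flag_ro" --of="$f1" --oflag="$flag_rw"
    cmp -s "$f0" "$f1" && echo "copy intact with $flag_ro/$flag_rw"
  done
done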
00:44:35.025 [2024-06-10 12:08:06.921892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168447 ] 00:44:35.283 [2024-06-10 12:08:07.109629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:35.542 [2024-06-10 12:08:07.394088] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:37.175  Copying: 512/512 [B] (average 500 kBps) 00:44:37.175 00:44:37.175 12:08:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nb6vidbn9damg601tnfutlfp5md80w9uxbrtjafmzqwwedd2jfpnvjnoomxwf1ujllb0hj714cq0e1799wr946biy6ulj0xuyzbger1ddogmoz2qty81ibpaluzmwobg9motojs1xqkxv6sds2c2cszedvkuv2bnzk8fclhenxqh3zhxqdlggdeffi4r4q0t52bdfkhnfnq6025vigrr7vditofeuk2idxbu5f9xsbcn9g4cju3jt749o3q5p2iigw5b5alw2ovnw8jzzmb0lr767wb1t7hlxcygvsla4nbe8b9t588fdycykxivyfm0vzi8zxvh0sg3pmm4vzuvnj3wrybeqbr63ngo4aog7aqkkh96mff0c7r7fljnvsetgmsyz0nysjlojwe0dqq6ad1z5c10cr563q4bw7ytlng9u0iyjpdgkiw4cmwv2s9xv4n65qyg57dvm9cnlm8fyj3t88zldqssu6y13ybwrac7wprrr0wkalemxp37a5bn == \n\b\6\v\i\d\b\n\9\d\a\m\g\6\0\1\t\n\f\u\t\l\f\p\5\m\d\8\0\w\9\u\x\b\r\t\j\a\f\m\z\q\w\w\e\d\d\2\j\f\p\n\v\j\n\o\o\m\x\w\f\1\u\j\l\l\b\0\h\j\7\1\4\c\q\0\e\1\7\9\9\w\r\9\4\6\b\i\y\6\u\l\j\0\x\u\y\z\b\g\e\r\1\d\d\o\g\m\o\z\2\q\t\y\8\1\i\b\p\a\l\u\z\m\w\o\b\g\9\m\o\t\o\j\s\1\x\q\k\x\v\6\s\d\s\2\c\2\c\s\z\e\d\v\k\u\v\2\b\n\z\k\8\f\c\l\h\e\n\x\q\h\3\z\h\x\q\d\l\g\g\d\e\f\f\i\4\r\4\q\0\t\5\2\b\d\f\k\h\n\f\n\q\6\0\2\5\v\i\g\r\r\7\v\d\i\t\o\f\e\u\k\2\i\d\x\b\u\5\f\9\x\s\b\c\n\9\g\4\c\j\u\3\j\t\7\4\9\o\3\q\5\p\2\i\i\g\w\5\b\5\a\l\w\2\o\v\n\w\8\j\z\z\m\b\0\l\r\7\6\7\w\b\1\t\7\h\l\x\c\y\g\v\s\l\a\4\n\b\e\8\b\9\t\5\8\8\f\d\y\c\y\k\x\i\v\y\f\m\0\v\z\i\8\z\x\v\h\0\s\g\3\p\m\m\4\v\z\u\v\n\j\3\w\r\y\b\e\q\b\r\6\3\n\g\o\4\a\o\g\7\a\q\k\k\h\9\6\m\f\f\0\c\7\r\7\f\l\j\n\v\s\e\t\g\m\s\y\z\0\n\y\s\j\l\o\j\w\e\0\d\q\q\6\a\d\1\z\5\c\1\0\c\r\5\6\3\q\4\b\w\7\y\t\l\n\g\9\u\0\i\y\j\p\d\g\k\i\w\4\c\m\w\v\2\s\9\x\v\4\n\6\5\q\y\g\5\7\d\v\m\9\c\n\l\m\8\f\y\j\3\t\8\8\z\l\d\q\s\s\u\6\y\1\3\y\b\w\r\a\c\7\w\p\r\r\r\0\w\k\a\l\e\m\x\p\3\7\a\5\b\n ]] 00:44:37.175 12:08:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:37.175 12:08:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:44:37.433 [2024-06-10 12:08:09.312051] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:37.433 [2024-06-10 12:08:09.312324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168481 ] 00:44:37.691 [2024-06-10 12:08:09.503242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:37.949 [2024-06-10 12:08:09.805301] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:39.611  Copying: 512/512 [B] (average 500 kBps) 00:44:39.611 00:44:39.611 12:08:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nb6vidbn9damg601tnfutlfp5md80w9uxbrtjafmzqwwedd2jfpnvjnoomxwf1ujllb0hj714cq0e1799wr946biy6ulj0xuyzbger1ddogmoz2qty81ibpaluzmwobg9motojs1xqkxv6sds2c2cszedvkuv2bnzk8fclhenxqh3zhxqdlggdeffi4r4q0t52bdfkhnfnq6025vigrr7vditofeuk2idxbu5f9xsbcn9g4cju3jt749o3q5p2iigw5b5alw2ovnw8jzzmb0lr767wb1t7hlxcygvsla4nbe8b9t588fdycykxivyfm0vzi8zxvh0sg3pmm4vzuvnj3wrybeqbr63ngo4aog7aqkkh96mff0c7r7fljnvsetgmsyz0nysjlojwe0dqq6ad1z5c10cr563q4bw7ytlng9u0iyjpdgkiw4cmwv2s9xv4n65qyg57dvm9cnlm8fyj3t88zldqssu6y13ybwrac7wprrr0wkalemxp37a5bn == \n\b\6\v\i\d\b\n\9\d\a\m\g\6\0\1\t\n\f\u\t\l\f\p\5\m\d\8\0\w\9\u\x\b\r\t\j\a\f\m\z\q\w\w\e\d\d\2\j\f\p\n\v\j\n\o\o\m\x\w\f\1\u\j\l\l\b\0\h\j\7\1\4\c\q\0\e\1\7\9\9\w\r\9\4\6\b\i\y\6\u\l\j\0\x\u\y\z\b\g\e\r\1\d\d\o\g\m\o\z\2\q\t\y\8\1\i\b\p\a\l\u\z\m\w\o\b\g\9\m\o\t\o\j\s\1\x\q\k\x\v\6\s\d\s\2\c\2\c\s\z\e\d\v\k\u\v\2\b\n\z\k\8\f\c\l\h\e\n\x\q\h\3\z\h\x\q\d\l\g\g\d\e\f\f\i\4\r\4\q\0\t\5\2\b\d\f\k\h\n\f\n\q\6\0\2\5\v\i\g\r\r\7\v\d\i\t\o\f\e\u\k\2\i\d\x\b\u\5\f\9\x\s\b\c\n\9\g\4\c\j\u\3\j\t\7\4\9\o\3\q\5\p\2\i\i\g\w\5\b\5\a\l\w\2\o\v\n\w\8\j\z\z\m\b\0\l\r\7\6\7\w\b\1\t\7\h\l\x\c\y\g\v\s\l\a\4\n\b\e\8\b\9\t\5\8\8\f\d\y\c\y\k\x\i\v\y\f\m\0\v\z\i\8\z\x\v\h\0\s\g\3\p\m\m\4\v\z\u\v\n\j\3\w\r\y\b\e\q\b\r\6\3\n\g\o\4\a\o\g\7\a\q\k\k\h\9\6\m\f\f\0\c\7\r\7\f\l\j\n\v\s\e\t\g\m\s\y\z\0\n\y\s\j\l\o\j\w\e\0\d\q\q\6\a\d\1\z\5\c\1\0\c\r\5\6\3\q\4\b\w\7\y\t\l\n\g\9\u\0\i\y\j\p\d\g\k\i\w\4\c\m\w\v\2\s\9\x\v\4\n\6\5\q\y\g\5\7\d\v\m\9\c\n\l\m\8\f\y\j\3\t\8\8\z\l\d\q\s\s\u\6\y\1\3\y\b\w\r\a\c\7\w\p\r\r\r\0\w\k\a\l\e\m\x\p\3\7\a\5\b\n ]] 00:44:39.611 12:08:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:39.611 12:08:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:44:39.869 [2024-06-10 12:08:11.710798] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:39.869 [2024-06-10 12:08:11.710962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168509 ] 00:44:39.869 [2024-06-10 12:08:11.876088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:40.127 [2024-06-10 12:08:12.129129] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:42.066  Copying: 512/512 [B] (average 166 kBps) 00:44:42.066 00:44:42.066 12:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nb6vidbn9damg601tnfutlfp5md80w9uxbrtjafmzqwwedd2jfpnvjnoomxwf1ujllb0hj714cq0e1799wr946biy6ulj0xuyzbger1ddogmoz2qty81ibpaluzmwobg9motojs1xqkxv6sds2c2cszedvkuv2bnzk8fclhenxqh3zhxqdlggdeffi4r4q0t52bdfkhnfnq6025vigrr7vditofeuk2idxbu5f9xsbcn9g4cju3jt749o3q5p2iigw5b5alw2ovnw8jzzmb0lr767wb1t7hlxcygvsla4nbe8b9t588fdycykxivyfm0vzi8zxvh0sg3pmm4vzuvnj3wrybeqbr63ngo4aog7aqkkh96mff0c7r7fljnvsetgmsyz0nysjlojwe0dqq6ad1z5c10cr563q4bw7ytlng9u0iyjpdgkiw4cmwv2s9xv4n65qyg57dvm9cnlm8fyj3t88zldqssu6y13ybwrac7wprrr0wkalemxp37a5bn == \n\b\6\v\i\d\b\n\9\d\a\m\g\6\0\1\t\n\f\u\t\l\f\p\5\m\d\8\0\w\9\u\x\b\r\t\j\a\f\m\z\q\w\w\e\d\d\2\j\f\p\n\v\j\n\o\o\m\x\w\f\1\u\j\l\l\b\0\h\j\7\1\4\c\q\0\e\1\7\9\9\w\r\9\4\6\b\i\y\6\u\l\j\0\x\u\y\z\b\g\e\r\1\d\d\o\g\m\o\z\2\q\t\y\8\1\i\b\p\a\l\u\z\m\w\o\b\g\9\m\o\t\o\j\s\1\x\q\k\x\v\6\s\d\s\2\c\2\c\s\z\e\d\v\k\u\v\2\b\n\z\k\8\f\c\l\h\e\n\x\q\h\3\z\h\x\q\d\l\g\g\d\e\f\f\i\4\r\4\q\0\t\5\2\b\d\f\k\h\n\f\n\q\6\0\2\5\v\i\g\r\r\7\v\d\i\t\o\f\e\u\k\2\i\d\x\b\u\5\f\9\x\s\b\c\n\9\g\4\c\j\u\3\j\t\7\4\9\o\3\q\5\p\2\i\i\g\w\5\b\5\a\l\w\2\o\v\n\w\8\j\z\z\m\b\0\l\r\7\6\7\w\b\1\t\7\h\l\x\c\y\g\v\s\l\a\4\n\b\e\8\b\9\t\5\8\8\f\d\y\c\y\k\x\i\v\y\f\m\0\v\z\i\8\z\x\v\h\0\s\g\3\p\m\m\4\v\z\u\v\n\j\3\w\r\y\b\e\q\b\r\6\3\n\g\o\4\a\o\g\7\a\q\k\k\h\9\6\m\f\f\0\c\7\r\7\f\l\j\n\v\s\e\t\g\m\s\y\z\0\n\y\s\j\l\o\j\w\e\0\d\q\q\6\a\d\1\z\5\c\1\0\c\r\5\6\3\q\4\b\w\7\y\t\l\n\g\9\u\0\i\y\j\p\d\g\k\i\w\4\c\m\w\v\2\s\9\x\v\4\n\6\5\q\y\g\5\7\d\v\m\9\c\n\l\m\8\f\y\j\3\t\8\8\z\l\d\q\s\s\u\6\y\1\3\y\b\w\r\a\c\7\w\p\r\r\r\0\w\k\a\l\e\m\x\p\3\7\a\5\b\n ]] 00:44:42.066 12:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:42.066 12:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:44:42.339 [2024-06-10 12:08:14.166565] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:42.339 [2024-06-10 12:08:14.166911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168534 ] 00:44:42.339 [2024-06-10 12:08:14.351915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:42.618 [2024-06-10 12:08:14.641337] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:44.557  Copying: 512/512 [B] (average 250 kBps) 00:44:44.557 00:44:44.557 12:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nb6vidbn9damg601tnfutlfp5md80w9uxbrtjafmzqwwedd2jfpnvjnoomxwf1ujllb0hj714cq0e1799wr946biy6ulj0xuyzbger1ddogmoz2qty81ibpaluzmwobg9motojs1xqkxv6sds2c2cszedvkuv2bnzk8fclhenxqh3zhxqdlggdeffi4r4q0t52bdfkhnfnq6025vigrr7vditofeuk2idxbu5f9xsbcn9g4cju3jt749o3q5p2iigw5b5alw2ovnw8jzzmb0lr767wb1t7hlxcygvsla4nbe8b9t588fdycykxivyfm0vzi8zxvh0sg3pmm4vzuvnj3wrybeqbr63ngo4aog7aqkkh96mff0c7r7fljnvsetgmsyz0nysjlojwe0dqq6ad1z5c10cr563q4bw7ytlng9u0iyjpdgkiw4cmwv2s9xv4n65qyg57dvm9cnlm8fyj3t88zldqssu6y13ybwrac7wprrr0wkalemxp37a5bn == \n\b\6\v\i\d\b\n\9\d\a\m\g\6\0\1\t\n\f\u\t\l\f\p\5\m\d\8\0\w\9\u\x\b\r\t\j\a\f\m\z\q\w\w\e\d\d\2\j\f\p\n\v\j\n\o\o\m\x\w\f\1\u\j\l\l\b\0\h\j\7\1\4\c\q\0\e\1\7\9\9\w\r\9\4\6\b\i\y\6\u\l\j\0\x\u\y\z\b\g\e\r\1\d\d\o\g\m\o\z\2\q\t\y\8\1\i\b\p\a\l\u\z\m\w\o\b\g\9\m\o\t\o\j\s\1\x\q\k\x\v\6\s\d\s\2\c\2\c\s\z\e\d\v\k\u\v\2\b\n\z\k\8\f\c\l\h\e\n\x\q\h\3\z\h\x\q\d\l\g\g\d\e\f\f\i\4\r\4\q\0\t\5\2\b\d\f\k\h\n\f\n\q\6\0\2\5\v\i\g\r\r\7\v\d\i\t\o\f\e\u\k\2\i\d\x\b\u\5\f\9\x\s\b\c\n\9\g\4\c\j\u\3\j\t\7\4\9\o\3\q\5\p\2\i\i\g\w\5\b\5\a\l\w\2\o\v\n\w\8\j\z\z\m\b\0\l\r\7\6\7\w\b\1\t\7\h\l\x\c\y\g\v\s\l\a\4\n\b\e\8\b\9\t\5\8\8\f\d\y\c\y\k\x\i\v\y\f\m\0\v\z\i\8\z\x\v\h\0\s\g\3\p\m\m\4\v\z\u\v\n\j\3\w\r\y\b\e\q\b\r\6\3\n\g\o\4\a\o\g\7\a\q\k\k\h\9\6\m\f\f\0\c\7\r\7\f\l\j\n\v\s\e\t\g\m\s\y\z\0\n\y\s\j\l\o\j\w\e\0\d\q\q\6\a\d\1\z\5\c\1\0\c\r\5\6\3\q\4\b\w\7\y\t\l\n\g\9\u\0\i\y\j\p\d\g\k\i\w\4\c\m\w\v\2\s\9\x\v\4\n\6\5\q\y\g\5\7\d\v\m\9\c\n\l\m\8\f\y\j\3\t\8\8\z\l\d\q\s\s\u\6\y\1\3\y\b\w\r\a\c\7\w\p\r\r\r\0\w\k\a\l\e\m\x\p\3\7\a\5\b\n ]] 00:44:44.557 12:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:44:44.557 12:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:44:44.557 12:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:44:44.557 12:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:44:44.557 12:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:44.557 12:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:44:44.557 [2024-06-10 12:08:16.561028] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:44.557 [2024-06-10 12:08:16.561282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168572 ] 00:44:44.815 [2024-06-10 12:08:16.736957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:45.072 [2024-06-10 12:08:17.033205] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:47.010  Copying: 512/512 [B] (average 500 kBps) 00:44:47.010 00:44:47.011 12:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x5t7g50889i9kq7v4ileqnu96qn8q5vcpxsix4d220y6axxgjp7q7feh1ckuin4un201fuorayu6g3e2vsopuc77mec7fa2n48twjvggse4fb5z9qq29tlb8ojat4hfgfm3gacgjj5thbjv5ld9wghwsvj157dhvtzzzsva00r13p14om2azziklac0mip0tavqdqhpgonugoa8dy6d5n569ri0rl0e3nll8zols858eypcxwa85qfhb83p12rioic0ve4269ub0pd8id3ej0dgo7l92i501t9r97nbdz0v6mjf0i3lhgzlo0ecwjo8fhcqc3454yjjxrzbw2i5h2w4oyad3wa8m6i1uy6neljv9h8a6pqku06u3hmcjyicwzuhc0fkzdhstts81g42fopci4o56y4mtppw4osvbnoydtlvrhuepetnj89pw7h2dcxa2egnr5xnvnhdvkub1cyp4ki7m9tem5ogbdtk76p1btzpzbm4758q94x1ewt3r == \x\5\t\7\g\5\0\8\8\9\i\9\k\q\7\v\4\i\l\e\q\n\u\9\6\q\n\8\q\5\v\c\p\x\s\i\x\4\d\2\2\0\y\6\a\x\x\g\j\p\7\q\7\f\e\h\1\c\k\u\i\n\4\u\n\2\0\1\f\u\o\r\a\y\u\6\g\3\e\2\v\s\o\p\u\c\7\7\m\e\c\7\f\a\2\n\4\8\t\w\j\v\g\g\s\e\4\f\b\5\z\9\q\q\2\9\t\l\b\8\o\j\a\t\4\h\f\g\f\m\3\g\a\c\g\j\j\5\t\h\b\j\v\5\l\d\9\w\g\h\w\s\v\j\1\5\7\d\h\v\t\z\z\z\s\v\a\0\0\r\1\3\p\1\4\o\m\2\a\z\z\i\k\l\a\c\0\m\i\p\0\t\a\v\q\d\q\h\p\g\o\n\u\g\o\a\8\d\y\6\d\5\n\5\6\9\r\i\0\r\l\0\e\3\n\l\l\8\z\o\l\s\8\5\8\e\y\p\c\x\w\a\8\5\q\f\h\b\8\3\p\1\2\r\i\o\i\c\0\v\e\4\2\6\9\u\b\0\p\d\8\i\d\3\e\j\0\d\g\o\7\l\9\2\i\5\0\1\t\9\r\9\7\n\b\d\z\0\v\6\m\j\f\0\i\3\l\h\g\z\l\o\0\e\c\w\j\o\8\f\h\c\q\c\3\4\5\4\y\j\j\x\r\z\b\w\2\i\5\h\2\w\4\o\y\a\d\3\w\a\8\m\6\i\1\u\y\6\n\e\l\j\v\9\h\8\a\6\p\q\k\u\0\6\u\3\h\m\c\j\y\i\c\w\z\u\h\c\0\f\k\z\d\h\s\t\t\s\8\1\g\4\2\f\o\p\c\i\4\o\5\6\y\4\m\t\p\p\w\4\o\s\v\b\n\o\y\d\t\l\v\r\h\u\e\p\e\t\n\j\8\9\p\w\7\h\2\d\c\x\a\2\e\g\n\r\5\x\n\v\n\h\d\v\k\u\b\1\c\y\p\4\k\i\7\m\9\t\e\m\5\o\g\b\d\t\k\7\6\p\1\b\t\z\p\z\b\m\4\7\5\8\q\9\4\x\1\e\w\t\3\r ]] 00:44:47.011 12:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:47.011 12:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:44:47.011 [2024-06-10 12:08:18.985705] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:47.011 [2024-06-10 12:08:18.987049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168596 ] 00:44:47.268 [2024-06-10 12:08:19.175895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:47.526 [2024-06-10 12:08:19.465591] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:49.467  Copying: 512/512 [B] (average 500 kBps) 00:44:49.467 00:44:49.467 12:08:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x5t7g50889i9kq7v4ileqnu96qn8q5vcpxsix4d220y6axxgjp7q7feh1ckuin4un201fuorayu6g3e2vsopuc77mec7fa2n48twjvggse4fb5z9qq29tlb8ojat4hfgfm3gacgjj5thbjv5ld9wghwsvj157dhvtzzzsva00r13p14om2azziklac0mip0tavqdqhpgonugoa8dy6d5n569ri0rl0e3nll8zols858eypcxwa85qfhb83p12rioic0ve4269ub0pd8id3ej0dgo7l92i501t9r97nbdz0v6mjf0i3lhgzlo0ecwjo8fhcqc3454yjjxrzbw2i5h2w4oyad3wa8m6i1uy6neljv9h8a6pqku06u3hmcjyicwzuhc0fkzdhstts81g42fopci4o56y4mtppw4osvbnoydtlvrhuepetnj89pw7h2dcxa2egnr5xnvnhdvkub1cyp4ki7m9tem5ogbdtk76p1btzpzbm4758q94x1ewt3r == \x\5\t\7\g\5\0\8\8\9\i\9\k\q\7\v\4\i\l\e\q\n\u\9\6\q\n\8\q\5\v\c\p\x\s\i\x\4\d\2\2\0\y\6\a\x\x\g\j\p\7\q\7\f\e\h\1\c\k\u\i\n\4\u\n\2\0\1\f\u\o\r\a\y\u\6\g\3\e\2\v\s\o\p\u\c\7\7\m\e\c\7\f\a\2\n\4\8\t\w\j\v\g\g\s\e\4\f\b\5\z\9\q\q\2\9\t\l\b\8\o\j\a\t\4\h\f\g\f\m\3\g\a\c\g\j\j\5\t\h\b\j\v\5\l\d\9\w\g\h\w\s\v\j\1\5\7\d\h\v\t\z\z\z\s\v\a\0\0\r\1\3\p\1\4\o\m\2\a\z\z\i\k\l\a\c\0\m\i\p\0\t\a\v\q\d\q\h\p\g\o\n\u\g\o\a\8\d\y\6\d\5\n\5\6\9\r\i\0\r\l\0\e\3\n\l\l\8\z\o\l\s\8\5\8\e\y\p\c\x\w\a\8\5\q\f\h\b\8\3\p\1\2\r\i\o\i\c\0\v\e\4\2\6\9\u\b\0\p\d\8\i\d\3\e\j\0\d\g\o\7\l\9\2\i\5\0\1\t\9\r\9\7\n\b\d\z\0\v\6\m\j\f\0\i\3\l\h\g\z\l\o\0\e\c\w\j\o\8\f\h\c\q\c\3\4\5\4\y\j\j\x\r\z\b\w\2\i\5\h\2\w\4\o\y\a\d\3\w\a\8\m\6\i\1\u\y\6\n\e\l\j\v\9\h\8\a\6\p\q\k\u\0\6\u\3\h\m\c\j\y\i\c\w\z\u\h\c\0\f\k\z\d\h\s\t\t\s\8\1\g\4\2\f\o\p\c\i\4\o\5\6\y\4\m\t\p\p\w\4\o\s\v\b\n\o\y\d\t\l\v\r\h\u\e\p\e\t\n\j\8\9\p\w\7\h\2\d\c\x\a\2\e\g\n\r\5\x\n\v\n\h\d\v\k\u\b\1\c\y\p\4\k\i\7\m\9\t\e\m\5\o\g\b\d\t\k\7\6\p\1\b\t\z\p\z\b\m\4\7\5\8\q\9\4\x\1\e\w\t\3\r ]] 00:44:49.467 12:08:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:49.467 12:08:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:44:49.467 [2024-06-10 12:08:21.510844] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:49.467 [2024-06-10 12:08:21.511011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168632 ] 00:44:49.725 [2024-06-10 12:08:21.678642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:49.983 [2024-06-10 12:08:21.918108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:51.918  Copying: 512/512 [B] (average 500 kBps) 00:44:51.918 00:44:51.918 12:08:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x5t7g50889i9kq7v4ileqnu96qn8q5vcpxsix4d220y6axxgjp7q7feh1ckuin4un201fuorayu6g3e2vsopuc77mec7fa2n48twjvggse4fb5z9qq29tlb8ojat4hfgfm3gacgjj5thbjv5ld9wghwsvj157dhvtzzzsva00r13p14om2azziklac0mip0tavqdqhpgonugoa8dy6d5n569ri0rl0e3nll8zols858eypcxwa85qfhb83p12rioic0ve4269ub0pd8id3ej0dgo7l92i501t9r97nbdz0v6mjf0i3lhgzlo0ecwjo8fhcqc3454yjjxrzbw2i5h2w4oyad3wa8m6i1uy6neljv9h8a6pqku06u3hmcjyicwzuhc0fkzdhstts81g42fopci4o56y4mtppw4osvbnoydtlvrhuepetnj89pw7h2dcxa2egnr5xnvnhdvkub1cyp4ki7m9tem5ogbdtk76p1btzpzbm4758q94x1ewt3r == \x\5\t\7\g\5\0\8\8\9\i\9\k\q\7\v\4\i\l\e\q\n\u\9\6\q\n\8\q\5\v\c\p\x\s\i\x\4\d\2\2\0\y\6\a\x\x\g\j\p\7\q\7\f\e\h\1\c\k\u\i\n\4\u\n\2\0\1\f\u\o\r\a\y\u\6\g\3\e\2\v\s\o\p\u\c\7\7\m\e\c\7\f\a\2\n\4\8\t\w\j\v\g\g\s\e\4\f\b\5\z\9\q\q\2\9\t\l\b\8\o\j\a\t\4\h\f\g\f\m\3\g\a\c\g\j\j\5\t\h\b\j\v\5\l\d\9\w\g\h\w\s\v\j\1\5\7\d\h\v\t\z\z\z\s\v\a\0\0\r\1\3\p\1\4\o\m\2\a\z\z\i\k\l\a\c\0\m\i\p\0\t\a\v\q\d\q\h\p\g\o\n\u\g\o\a\8\d\y\6\d\5\n\5\6\9\r\i\0\r\l\0\e\3\n\l\l\8\z\o\l\s\8\5\8\e\y\p\c\x\w\a\8\5\q\f\h\b\8\3\p\1\2\r\i\o\i\c\0\v\e\4\2\6\9\u\b\0\p\d\8\i\d\3\e\j\0\d\g\o\7\l\9\2\i\5\0\1\t\9\r\9\7\n\b\d\z\0\v\6\m\j\f\0\i\3\l\h\g\z\l\o\0\e\c\w\j\o\8\f\h\c\q\c\3\4\5\4\y\j\j\x\r\z\b\w\2\i\5\h\2\w\4\o\y\a\d\3\w\a\8\m\6\i\1\u\y\6\n\e\l\j\v\9\h\8\a\6\p\q\k\u\0\6\u\3\h\m\c\j\y\i\c\w\z\u\h\c\0\f\k\z\d\h\s\t\t\s\8\1\g\4\2\f\o\p\c\i\4\o\5\6\y\4\m\t\p\p\w\4\o\s\v\b\n\o\y\d\t\l\v\r\h\u\e\p\e\t\n\j\8\9\p\w\7\h\2\d\c\x\a\2\e\g\n\r\5\x\n\v\n\h\d\v\k\u\b\1\c\y\p\4\k\i\7\m\9\t\e\m\5\o\g\b\d\t\k\7\6\p\1\b\t\z\p\z\b\m\4\7\5\8\q\9\4\x\1\e\w\t\3\r ]] 00:44:51.918 12:08:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:51.918 12:08:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:44:51.918 [2024-06-10 12:08:23.776662] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:44:51.918 [2024-06-10 12:08:23.776837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168660 ] 00:44:51.918 [2024-06-10 12:08:23.940720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:52.218 [2024-06-10 12:08:24.161120] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:53.847  Copying: 512/512 [B] (average 125 kBps) 00:44:53.847 00:44:53.847 ************************************ 00:44:53.847 END TEST dd_flags_misc 00:44:53.847 ************************************ 00:44:53.847 12:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x5t7g50889i9kq7v4ileqnu96qn8q5vcpxsix4d220y6axxgjp7q7feh1ckuin4un201fuorayu6g3e2vsopuc77mec7fa2n48twjvggse4fb5z9qq29tlb8ojat4hfgfm3gacgjj5thbjv5ld9wghwsvj157dhvtzzzsva00r13p14om2azziklac0mip0tavqdqhpgonugoa8dy6d5n569ri0rl0e3nll8zols858eypcxwa85qfhb83p12rioic0ve4269ub0pd8id3ej0dgo7l92i501t9r97nbdz0v6mjf0i3lhgzlo0ecwjo8fhcqc3454yjjxrzbw2i5h2w4oyad3wa8m6i1uy6neljv9h8a6pqku06u3hmcjyicwzuhc0fkzdhstts81g42fopci4o56y4mtppw4osvbnoydtlvrhuepetnj89pw7h2dcxa2egnr5xnvnhdvkub1cyp4ki7m9tem5ogbdtk76p1btzpzbm4758q94x1ewt3r == \x\5\t\7\g\5\0\8\8\9\i\9\k\q\7\v\4\i\l\e\q\n\u\9\6\q\n\8\q\5\v\c\p\x\s\i\x\4\d\2\2\0\y\6\a\x\x\g\j\p\7\q\7\f\e\h\1\c\k\u\i\n\4\u\n\2\0\1\f\u\o\r\a\y\u\6\g\3\e\2\v\s\o\p\u\c\7\7\m\e\c\7\f\a\2\n\4\8\t\w\j\v\g\g\s\e\4\f\b\5\z\9\q\q\2\9\t\l\b\8\o\j\a\t\4\h\f\g\f\m\3\g\a\c\g\j\j\5\t\h\b\j\v\5\l\d\9\w\g\h\w\s\v\j\1\5\7\d\h\v\t\z\z\z\s\v\a\0\0\r\1\3\p\1\4\o\m\2\a\z\z\i\k\l\a\c\0\m\i\p\0\t\a\v\q\d\q\h\p\g\o\n\u\g\o\a\8\d\y\6\d\5\n\5\6\9\r\i\0\r\l\0\e\3\n\l\l\8\z\o\l\s\8\5\8\e\y\p\c\x\w\a\8\5\q\f\h\b\8\3\p\1\2\r\i\o\i\c\0\v\e\4\2\6\9\u\b\0\p\d\8\i\d\3\e\j\0\d\g\o\7\l\9\2\i\5\0\1\t\9\r\9\7\n\b\d\z\0\v\6\m\j\f\0\i\3\l\h\g\z\l\o\0\e\c\w\j\o\8\f\h\c\q\c\3\4\5\4\y\j\j\x\r\z\b\w\2\i\5\h\2\w\4\o\y\a\d\3\w\a\8\m\6\i\1\u\y\6\n\e\l\j\v\9\h\8\a\6\p\q\k\u\0\6\u\3\h\m\c\j\y\i\c\w\z\u\h\c\0\f\k\z\d\h\s\t\t\s\8\1\g\4\2\f\o\p\c\i\4\o\5\6\y\4\m\t\p\p\w\4\o\s\v\b\n\o\y\d\t\l\v\r\h\u\e\p\e\t\n\j\8\9\p\w\7\h\2\d\c\x\a\2\e\g\n\r\5\x\n\v\n\h\d\v\k\u\b\1\c\y\p\4\k\i\7\m\9\t\e\m\5\o\g\b\d\t\k\7\6\p\1\b\t\z\p\z\b\m\4\7\5\8\q\9\4\x\1\e\w\t\3\r ]] 00:44:53.847 00:44:53.847 real 0m19.037s 00:44:53.847 user 0m16.062s 00:44:53.847 sys 0m1.900s 00:44:53.847 12:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:53.847 12:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:44:54.106 * Second test run, using AIO 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:54.106 ************************************ 00:44:54.106 START TEST dd_flag_append_forced_aio 00:44:54.106 ************************************ 00:44:54.106 12:08:25 
spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # append 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=afig4nx4yn14zpzv6gadk29dwdxd6exy 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=4eapiflt4awhopexzczzhel0g6sxtarq 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s afig4nx4yn14zpzv6gadk29dwdxd6exy 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 4eapiflt4awhopexzczzhel0g6sxtarq 00:44:54.106 12:08:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:44:54.106 [2024-06-10 12:08:25.996494] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
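The second pass, announced above with DD_APP+=("--aio"), repeats the same checks with the extra --aio switch on every invocation so the AIO code path is exercised explicitly; nothing else changes. For the append case that means, schematically (tiny fixed payloads used here purely for illustration):

#!/usr/bin/env bash
set -eu
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
f1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

printf %s foo > "$f0"
printf %s bar > "$f1"

# Same append semantics as before, but forced through the AIO engine.
"$SPDK_DD" --aio --if="$f0" --of="$f1" --oflag=append
[[ $(<"$f1") == barfoo ]] && echo 'append with --aio OK'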
00:44:54.106 [2024-06-10 12:08:25.996642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168706 ] 00:44:54.106 [2024-06-10 12:08:26.162845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:54.365 [2024-06-10 12:08:26.389354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:56.304  Copying: 32/32 [B] (average 31 kBps) 00:44:56.304 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 4eapiflt4awhopexzczzhel0g6sxtarqafig4nx4yn14zpzv6gadk29dwdxd6exy == \4\e\a\p\i\f\l\t\4\a\w\h\o\p\e\x\z\c\z\z\h\e\l\0\g\6\s\x\t\a\r\q\a\f\i\g\4\n\x\4\y\n\1\4\z\p\z\v\6\g\a\d\k\2\9\d\w\d\x\d\6\e\x\y ]] 00:44:56.304 00:44:56.304 real 0m2.222s 00:44:56.304 user 0m1.840s 00:44:56.304 sys 0m0.255s 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:56.304 ************************************ 00:44:56.304 END TEST dd_flag_append_forced_aio 00:44:56.304 ************************************ 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:56.304 ************************************ 00:44:56.304 START TEST dd_flag_directory_forced_aio 00:44:56.304 ************************************ 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # directory 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:56.304 12:08:28 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:56.304 12:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:56.304 [2024-06-10 12:08:28.299324] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:56.304 [2024-06-10 12:08:28.299792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168762 ] 00:44:56.562 [2024-06-10 12:08:28.483199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.819 [2024-06-10 12:08:28.780472] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:57.078 [2024-06-10 12:08:29.129902] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:57.078 [2024-06-10 12:08:29.130250] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:57.078 [2024-06-10 12:08:29.130410] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:58.012 [2024-06-10 12:08:29.944203] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # es=236 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # es=108 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:58.579 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:58.580 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:58.580 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:58.580 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:58.580 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:58.580 12:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:58.580 [2024-06-10 12:08:30.496304] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:44:58.580 [2024-06-10 12:08:30.496757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168790 ] 00:44:58.838 [2024-06-10 12:08:30.677405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:58.838 [2024-06-10 12:08:30.889915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:59.405 [2024-06-10 12:08:31.228676] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:59.405 [2024-06-10 12:08:31.228999] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:59.405 [2024-06-10 12:08:31.229271] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:00.338 [2024-06-10 12:08:32.072839] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:00.596 ************************************ 00:45:00.596 END TEST dd_flag_directory_forced_aio 00:45:00.596 ************************************ 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # es=236 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # es=108 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:00.596 00:45:00.596 real 0m4.286s 00:45:00.596 user 0m3.620s 00:45:00.596 sys 0m0.461s 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:00.596 12:08:32 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:45:00.596 ************************************ 00:45:00.596 START TEST dd_flag_nofollow_forced_aio 00:45:00.596 ************************************ 00:45:00.596 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # nofollow 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:00.597 12:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:00.854 [2024-06-10 12:08:32.666484] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:45:00.855 [2024-06-10 12:08:32.666961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168844 ] 00:45:00.855 [2024-06-10 12:08:32.854195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:01.113 [2024-06-10 12:08:33.112677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:01.679 [2024-06-10 12:08:33.456159] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:45:01.679 [2024-06-10 12:08:33.456258] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:45:01.679 [2024-06-10 12:08:33.456298] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:02.614 [2024-06-10 12:08:34.321290] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # es=216 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # es=88 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # local es=0 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:02.873 12:08:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:45:02.873 [2024-06-10 12:08:34.872231] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:45:02.873 [2024-06-10 12:08:34.872479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168875 ] 00:45:03.131 [2024-06-10 12:08:35.065629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:03.389 [2024-06-10 12:08:35.281447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:03.647 [2024-06-10 12:08:35.625426] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:45:03.647 [2024-06-10 12:08:35.625526] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:45:03.647 [2024-06-10 12:08:35.625569] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:04.582 [2024-06-10 12:08:36.511492] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:05.148 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # es=216 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # es=88 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # case "$es" in 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@669 -- # es=1 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:45:05.149 12:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:05.149 [2024-06-10 12:08:37.071318] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:05.149 [2024-06-10 12:08:37.071536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168899 ] 00:45:05.406 [2024-06-10 12:08:37.253673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:05.662 [2024-06-10 12:08:37.481048] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:07.315  Copying: 512/512 [B] (average 500 kBps) 00:45:07.316 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ jdsv8o8a6o8bnog8fgmst7765hsdt965dw8jeie88w9cwkxq6gv0k26xtquudr5gpqitn022ew4mp1jtir9j72bt85y2egmpdml1d9d7wkmtuao8r9kveyeofl822qosw6rco2ptkfgzt2ndcd1pzuf9a9hshkgyj7n3rkkv52pxnmdmh0fewmmqust49cdzpahwnivkan6v1mdqmq9zxinmkmad5a9nnx44ymcqcjnkjjiz5jega4dyfygmtvqlkn3auusjhrmflwtfgyc1x6tq5o0qluqb8zezq442ezcg7vbdjxi0ldtpbsuuzkhkfn98nbdhuim3q3exzksigbethp0nxltfilanbhw48s25o0uf3mz0titkcv5w3u7fowbl0uljn91ywl3bp6utwpkwud7f4b53fh3dtn18amlk373gh11u17kyq2yn6cpgmgk6nw6rlueqscvxfvqc9b53edwudik2me4n8438a4uxxsuwu6a2mc5x4629hyvb == \j\d\s\v\8\o\8\a\6\o\8\b\n\o\g\8\f\g\m\s\t\7\7\6\5\h\s\d\t\9\6\5\d\w\8\j\e\i\e\8\8\w\9\c\w\k\x\q\6\g\v\0\k\2\6\x\t\q\u\u\d\r\5\g\p\q\i\t\n\0\2\2\e\w\4\m\p\1\j\t\i\r\9\j\7\2\b\t\8\5\y\2\e\g\m\p\d\m\l\1\d\9\d\7\w\k\m\t\u\a\o\8\r\9\k\v\e\y\e\o\f\l\8\2\2\q\o\s\w\6\r\c\o\2\p\t\k\f\g\z\t\2\n\d\c\d\1\p\z\u\f\9\a\9\h\s\h\k\g\y\j\7\n\3\r\k\k\v\5\2\p\x\n\m\d\m\h\0\f\e\w\m\m\q\u\s\t\4\9\c\d\z\p\a\h\w\n\i\v\k\a\n\6\v\1\m\d\q\m\q\9\z\x\i\n\m\k\m\a\d\5\a\9\n\n\x\4\4\y\m\c\q\c\j\n\k\j\j\i\z\5\j\e\g\a\4\d\y\f\y\g\m\t\v\q\l\k\n\3\a\u\u\s\j\h\r\m\f\l\w\t\f\g\y\c\1\x\6\t\q\5\o\0\q\l\u\q\b\8\z\e\z\q\4\4\2\e\z\c\g\7\v\b\d\j\x\i\0\l\d\t\p\b\s\u\u\z\k\h\k\f\n\9\8\n\b\d\h\u\i\m\3\q\3\e\x\z\k\s\i\g\b\e\t\h\p\0\n\x\l\t\f\i\l\a\n\b\h\w\4\8\s\2\5\o\0\u\f\3\m\z\0\t\i\t\k\c\v\5\w\3\u\7\f\o\w\b\l\0\u\l\j\n\9\1\y\w\l\3\b\p\6\u\t\w\p\k\w\u\d\7\f\4\b\5\3\f\h\3\d\t\n\1\8\a\m\l\k\3\7\3\g\h\1\1\u\1\7\k\y\q\2\y\n\6\c\p\g\m\g\k\6\n\w\6\r\l\u\e\q\s\c\v\x\f\v\q\c\9\b\5\3\e\d\w\u\d\i\k\2\m\e\4\n\8\4\3\8\a\4\u\x\x\s\u\w\u\6\a\2\m\c\5\x\4\6\2\9\h\y\v\b ]] 00:45:07.316 00:45:07.316 real 0m6.678s 00:45:07.316 user 0m5.650s 00:45:07.316 sys 0m0.687s 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:45:07.316 ************************************ 00:45:07.316 END TEST dd_flag_nofollow_forced_aio 00:45:07.316 ************************************ 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:45:07.316 ************************************ 00:45:07.316 START TEST dd_flag_noatime_forced_aio 00:45:07.316 ************************************ 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # noatime 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- 
dd/posix.sh@54 -- # local atime_of 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1718021317 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1718021319 00:45:07.316 12:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:45:08.688 12:08:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:08.688 [2024-06-10 12:08:40.399275] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:45:08.688 [2024-06-10 12:08:40.399456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168967 ] 00:45:08.688 [2024-06-10 12:08:40.574231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:08.947 [2024-06-10 12:08:40.860254] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:10.571  Copying: 512/512 [B] (average 500 kBps) 00:45:10.571 00:45:10.828 12:08:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:10.828 12:08:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1718021317 )) 00:45:10.828 12:08:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:10.828 12:08:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1718021319 )) 00:45:10.828 12:08:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:10.828 [2024-06-10 12:08:42.713478] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:10.828 [2024-06-10 12:08:42.713646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168994 ] 00:45:10.828 [2024-06-10 12:08:42.878946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:11.086 [2024-06-10 12:08:43.100032] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:13.062  Copying: 512/512 [B] (average 500 kBps) 00:45:13.062 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1718021323 )) 00:45:13.062 00:45:13.062 real 0m5.599s 00:45:13.062 user 0m3.854s 00:45:13.062 sys 0m0.480s 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:13.062 ************************************ 00:45:13.062 END TEST dd_flag_noatime_forced_aio 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:45:13.062 ************************************ 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:45:13.062 ************************************ 00:45:13.062 START TEST dd_flags_misc_forced_aio 00:45:13.062 ************************************ 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # io 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:13.062 12:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:45:13.062 [2024-06-10 12:08:45.032318] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:13.062 [2024-06-10 12:08:45.032485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169049 ] 00:45:13.320 [2024-06-10 12:08:45.196924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:13.578 [2024-06-10 12:08:45.420645] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:15.207  Copying: 512/512 [B] (average 500 kBps) 00:45:15.207 00:45:15.207 12:08:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kds4f60w2dtz1ws203evw8f8ps61t53x84pdmfxmti5mp2khfuydt16o96gsizqu04vg875tcnx15etl85fzg7ys78a9r9hajcukjpw5x3q6atvourb8n9im3arrjia5g41zj70gckj8ovlkxsc5uqlm3llopysgt3qkwnp6snwr6q804nawtad1c4o411ijy33t9bl9ie6vdhzpsokue024y17vxj8ije3xcdclw4v7qqli3t45wrv33of6nj1wcd11pf9p6lecwp9m6cq5sl97hzyuyoonxipa1rdxu5wqwvr9qehjpm3mqevgsi5mbnasm057t382vdshf1fcalq55qtbveqkfywamyizqqgr4qlts8s3kpp4f812obux2nlz6te8n88cbujvf38yvdaa927p3mjdj2f386y9r8ie4487wscl0bd9nsfrr3l1j5dliivq16wbtgxdub4zo3vugt5ftlqlsgsyy8z8pf9cbijuw9mezdlcfar2mn9y == \k\d\s\4\f\6\0\w\2\d\t\z\1\w\s\2\0\3\e\v\w\8\f\8\p\s\6\1\t\5\3\x\8\4\p\d\m\f\x\m\t\i\5\m\p\2\k\h\f\u\y\d\t\1\6\o\9\6\g\s\i\z\q\u\0\4\v\g\8\7\5\t\c\n\x\1\5\e\t\l\8\5\f\z\g\7\y\s\7\8\a\9\r\9\h\a\j\c\u\k\j\p\w\5\x\3\q\6\a\t\v\o\u\r\b\8\n\9\i\m\3\a\r\r\j\i\a\5\g\4\1\z\j\7\0\g\c\k\j\8\o\v\l\k\x\s\c\5\u\q\l\m\3\l\l\o\p\y\s\g\t\3\q\k\w\n\p\6\s\n\w\r\6\q\8\0\4\n\a\w\t\a\d\1\c\4\o\4\1\1\i\j\y\3\3\t\9\b\l\9\i\e\6\v\d\h\z\p\s\o\k\u\e\0\2\4\y\1\7\v\x\j\8\i\j\e\3\x\c\d\c\l\w\4\v\7\q\q\l\i\3\t\4\5\w\r\v\3\3\o\f\6\n\j\1\w\c\d\1\1\p\f\9\p\6\l\e\c\w\p\9\m\6\c\q\5\s\l\9\7\h\z\y\u\y\o\o\n\x\i\p\a\1\r\d\x\u\5\w\q\w\v\r\9\q\e\h\j\p\m\3\m\q\e\v\g\s\i\5\m\b\n\a\s\m\0\5\7\t\3\8\2\v\d\s\h\f\1\f\c\a\l\q\5\5\q\t\b\v\e\q\k\f\y\w\a\m\y\i\z\q\q\g\r\4\q\l\t\s\8\s\3\k\p\p\4\f\8\1\2\o\b\u\x\2\n\l\z\6\t\e\8\n\8\8\c\b\u\j\v\f\3\8\y\v\d\a\a\9\2\7\p\3\m\j\d\j\2\f\3\8\6\y\9\r\8\i\e\4\4\8\7\w\s\c\l\0\b\d\9\n\s\f\r\r\3\l\1\j\5\d\l\i\i\v\q\1\6\w\b\t\g\x\d\u\b\4\z\o\3\v\u\g\t\5\f\t\l\q\l\s\g\s\y\y\8\z\8\p\f\9\c\b\i\j\u\w\9\m\e\z\d\l\c\f\a\r\2\m\n\9\y ]] 00:45:15.207 12:08:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:15.207 12:08:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:45:15.207 [2024-06-10 12:08:47.225258] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:15.207 [2024-06-10 12:08:47.225479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169078 ] 00:45:15.465 [2024-06-10 12:08:47.418383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.723 [2024-06-10 12:08:47.651766] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:17.374  Copying: 512/512 [B] (average 500 kBps) 00:45:17.374 00:45:17.374 12:08:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kds4f60w2dtz1ws203evw8f8ps61t53x84pdmfxmti5mp2khfuydt16o96gsizqu04vg875tcnx15etl85fzg7ys78a9r9hajcukjpw5x3q6atvourb8n9im3arrjia5g41zj70gckj8ovlkxsc5uqlm3llopysgt3qkwnp6snwr6q804nawtad1c4o411ijy33t9bl9ie6vdhzpsokue024y17vxj8ije3xcdclw4v7qqli3t45wrv33of6nj1wcd11pf9p6lecwp9m6cq5sl97hzyuyoonxipa1rdxu5wqwvr9qehjpm3mqevgsi5mbnasm057t382vdshf1fcalq55qtbveqkfywamyizqqgr4qlts8s3kpp4f812obux2nlz6te8n88cbujvf38yvdaa927p3mjdj2f386y9r8ie4487wscl0bd9nsfrr3l1j5dliivq16wbtgxdub4zo3vugt5ftlqlsgsyy8z8pf9cbijuw9mezdlcfar2mn9y == \k\d\s\4\f\6\0\w\2\d\t\z\1\w\s\2\0\3\e\v\w\8\f\8\p\s\6\1\t\5\3\x\8\4\p\d\m\f\x\m\t\i\5\m\p\2\k\h\f\u\y\d\t\1\6\o\9\6\g\s\i\z\q\u\0\4\v\g\8\7\5\t\c\n\x\1\5\e\t\l\8\5\f\z\g\7\y\s\7\8\a\9\r\9\h\a\j\c\u\k\j\p\w\5\x\3\q\6\a\t\v\o\u\r\b\8\n\9\i\m\3\a\r\r\j\i\a\5\g\4\1\z\j\7\0\g\c\k\j\8\o\v\l\k\x\s\c\5\u\q\l\m\3\l\l\o\p\y\s\g\t\3\q\k\w\n\p\6\s\n\w\r\6\q\8\0\4\n\a\w\t\a\d\1\c\4\o\4\1\1\i\j\y\3\3\t\9\b\l\9\i\e\6\v\d\h\z\p\s\o\k\u\e\0\2\4\y\1\7\v\x\j\8\i\j\e\3\x\c\d\c\l\w\4\v\7\q\q\l\i\3\t\4\5\w\r\v\3\3\o\f\6\n\j\1\w\c\d\1\1\p\f\9\p\6\l\e\c\w\p\9\m\6\c\q\5\s\l\9\7\h\z\y\u\y\o\o\n\x\i\p\a\1\r\d\x\u\5\w\q\w\v\r\9\q\e\h\j\p\m\3\m\q\e\v\g\s\i\5\m\b\n\a\s\m\0\5\7\t\3\8\2\v\d\s\h\f\1\f\c\a\l\q\5\5\q\t\b\v\e\q\k\f\y\w\a\m\y\i\z\q\q\g\r\4\q\l\t\s\8\s\3\k\p\p\4\f\8\1\2\o\b\u\x\2\n\l\z\6\t\e\8\n\8\8\c\b\u\j\v\f\3\8\y\v\d\a\a\9\2\7\p\3\m\j\d\j\2\f\3\8\6\y\9\r\8\i\e\4\4\8\7\w\s\c\l\0\b\d\9\n\s\f\r\r\3\l\1\j\5\d\l\i\i\v\q\1\6\w\b\t\g\x\d\u\b\4\z\o\3\v\u\g\t\5\f\t\l\q\l\s\g\s\y\y\8\z\8\p\f\9\c\b\i\j\u\w\9\m\e\z\d\l\c\f\a\r\2\m\n\9\y ]] 00:45:17.374 12:08:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:17.374 12:08:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:45:17.632 [2024-06-10 12:08:49.445635] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:17.632 [2024-06-10 12:08:49.445851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169108 ] 00:45:17.632 [2024-06-10 12:08:49.622919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:17.890 [2024-06-10 12:08:49.852380] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:19.834  Copying: 512/512 [B] (average 250 kBps) 00:45:19.834 00:45:19.834 12:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kds4f60w2dtz1ws203evw8f8ps61t53x84pdmfxmti5mp2khfuydt16o96gsizqu04vg875tcnx15etl85fzg7ys78a9r9hajcukjpw5x3q6atvourb8n9im3arrjia5g41zj70gckj8ovlkxsc5uqlm3llopysgt3qkwnp6snwr6q804nawtad1c4o411ijy33t9bl9ie6vdhzpsokue024y17vxj8ije3xcdclw4v7qqli3t45wrv33of6nj1wcd11pf9p6lecwp9m6cq5sl97hzyuyoonxipa1rdxu5wqwvr9qehjpm3mqevgsi5mbnasm057t382vdshf1fcalq55qtbveqkfywamyizqqgr4qlts8s3kpp4f812obux2nlz6te8n88cbujvf38yvdaa927p3mjdj2f386y9r8ie4487wscl0bd9nsfrr3l1j5dliivq16wbtgxdub4zo3vugt5ftlqlsgsyy8z8pf9cbijuw9mezdlcfar2mn9y == \k\d\s\4\f\6\0\w\2\d\t\z\1\w\s\2\0\3\e\v\w\8\f\8\p\s\6\1\t\5\3\x\8\4\p\d\m\f\x\m\t\i\5\m\p\2\k\h\f\u\y\d\t\1\6\o\9\6\g\s\i\z\q\u\0\4\v\g\8\7\5\t\c\n\x\1\5\e\t\l\8\5\f\z\g\7\y\s\7\8\a\9\r\9\h\a\j\c\u\k\j\p\w\5\x\3\q\6\a\t\v\o\u\r\b\8\n\9\i\m\3\a\r\r\j\i\a\5\g\4\1\z\j\7\0\g\c\k\j\8\o\v\l\k\x\s\c\5\u\q\l\m\3\l\l\o\p\y\s\g\t\3\q\k\w\n\p\6\s\n\w\r\6\q\8\0\4\n\a\w\t\a\d\1\c\4\o\4\1\1\i\j\y\3\3\t\9\b\l\9\i\e\6\v\d\h\z\p\s\o\k\u\e\0\2\4\y\1\7\v\x\j\8\i\j\e\3\x\c\d\c\l\w\4\v\7\q\q\l\i\3\t\4\5\w\r\v\3\3\o\f\6\n\j\1\w\c\d\1\1\p\f\9\p\6\l\e\c\w\p\9\m\6\c\q\5\s\l\9\7\h\z\y\u\y\o\o\n\x\i\p\a\1\r\d\x\u\5\w\q\w\v\r\9\q\e\h\j\p\m\3\m\q\e\v\g\s\i\5\m\b\n\a\s\m\0\5\7\t\3\8\2\v\d\s\h\f\1\f\c\a\l\q\5\5\q\t\b\v\e\q\k\f\y\w\a\m\y\i\z\q\q\g\r\4\q\l\t\s\8\s\3\k\p\p\4\f\8\1\2\o\b\u\x\2\n\l\z\6\t\e\8\n\8\8\c\b\u\j\v\f\3\8\y\v\d\a\a\9\2\7\p\3\m\j\d\j\2\f\3\8\6\y\9\r\8\i\e\4\4\8\7\w\s\c\l\0\b\d\9\n\s\f\r\r\3\l\1\j\5\d\l\i\i\v\q\1\6\w\b\t\g\x\d\u\b\4\z\o\3\v\u\g\t\5\f\t\l\q\l\s\g\s\y\y\8\z\8\p\f\9\c\b\i\j\u\w\9\m\e\z\d\l\c\f\a\r\2\m\n\9\y ]] 00:45:19.834 12:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:19.834 12:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:45:19.834 [2024-06-10 12:08:51.718967] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:19.834 [2024-06-10 12:08:51.719223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169132 ] 00:45:20.093 [2024-06-10 12:08:51.906116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:20.093 [2024-06-10 12:08:52.136148] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:22.032  Copying: 512/512 [B] (average 250 kBps) 00:45:22.032 00:45:22.032 12:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kds4f60w2dtz1ws203evw8f8ps61t53x84pdmfxmti5mp2khfuydt16o96gsizqu04vg875tcnx15etl85fzg7ys78a9r9hajcukjpw5x3q6atvourb8n9im3arrjia5g41zj70gckj8ovlkxsc5uqlm3llopysgt3qkwnp6snwr6q804nawtad1c4o411ijy33t9bl9ie6vdhzpsokue024y17vxj8ije3xcdclw4v7qqli3t45wrv33of6nj1wcd11pf9p6lecwp9m6cq5sl97hzyuyoonxipa1rdxu5wqwvr9qehjpm3mqevgsi5mbnasm057t382vdshf1fcalq55qtbveqkfywamyizqqgr4qlts8s3kpp4f812obux2nlz6te8n88cbujvf38yvdaa927p3mjdj2f386y9r8ie4487wscl0bd9nsfrr3l1j5dliivq16wbtgxdub4zo3vugt5ftlqlsgsyy8z8pf9cbijuw9mezdlcfar2mn9y == \k\d\s\4\f\6\0\w\2\d\t\z\1\w\s\2\0\3\e\v\w\8\f\8\p\s\6\1\t\5\3\x\8\4\p\d\m\f\x\m\t\i\5\m\p\2\k\h\f\u\y\d\t\1\6\o\9\6\g\s\i\z\q\u\0\4\v\g\8\7\5\t\c\n\x\1\5\e\t\l\8\5\f\z\g\7\y\s\7\8\a\9\r\9\h\a\j\c\u\k\j\p\w\5\x\3\q\6\a\t\v\o\u\r\b\8\n\9\i\m\3\a\r\r\j\i\a\5\g\4\1\z\j\7\0\g\c\k\j\8\o\v\l\k\x\s\c\5\u\q\l\m\3\l\l\o\p\y\s\g\t\3\q\k\w\n\p\6\s\n\w\r\6\q\8\0\4\n\a\w\t\a\d\1\c\4\o\4\1\1\i\j\y\3\3\t\9\b\l\9\i\e\6\v\d\h\z\p\s\o\k\u\e\0\2\4\y\1\7\v\x\j\8\i\j\e\3\x\c\d\c\l\w\4\v\7\q\q\l\i\3\t\4\5\w\r\v\3\3\o\f\6\n\j\1\w\c\d\1\1\p\f\9\p\6\l\e\c\w\p\9\m\6\c\q\5\s\l\9\7\h\z\y\u\y\o\o\n\x\i\p\a\1\r\d\x\u\5\w\q\w\v\r\9\q\e\h\j\p\m\3\m\q\e\v\g\s\i\5\m\b\n\a\s\m\0\5\7\t\3\8\2\v\d\s\h\f\1\f\c\a\l\q\5\5\q\t\b\v\e\q\k\f\y\w\a\m\y\i\z\q\q\g\r\4\q\l\t\s\8\s\3\k\p\p\4\f\8\1\2\o\b\u\x\2\n\l\z\6\t\e\8\n\8\8\c\b\u\j\v\f\3\8\y\v\d\a\a\9\2\7\p\3\m\j\d\j\2\f\3\8\6\y\9\r\8\i\e\4\4\8\7\w\s\c\l\0\b\d\9\n\s\f\r\r\3\l\1\j\5\d\l\i\i\v\q\1\6\w\b\t\g\x\d\u\b\4\z\o\3\v\u\g\t\5\f\t\l\q\l\s\g\s\y\y\8\z\8\p\f\9\c\b\i\j\u\w\9\m\e\z\d\l\c\f\a\r\2\m\n\9\y ]] 00:45:22.032 12:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:45:22.032 12:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:45:22.032 12:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:45:22.032 12:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:45:22.032 12:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:22.032 12:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:45:22.032 [2024-06-10 12:08:54.030209] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:22.032 [2024-06-10 12:08:54.030439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169160 ] 00:45:22.290 [2024-06-10 12:08:54.213152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:22.548 [2024-06-10 12:08:54.430923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:24.179  Copying: 512/512 [B] (average 500 kBps) 00:45:24.179 00:45:24.180 12:08:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ umuu5nhhe9ogecoqvkyabk9z2jiueo8q3dvw8h840snz6ggio1nq5ljbmj36thhy0fjnt6gzyl8hli9qol94ltt7v0dvz84nlkgktvdvpiaxkejluoch0h5w7imuihprjd6t9rs0ltdqi4lm2er0kh11p5rgiqsjxxsemvaac4tfea7rqb10smaskj6bv72497c11adi66m6xb3g8l5kcyqop7q8svrjkiw5qvvzd4p3zoi2yjbxdw20215pttgbby1jqi8t7cm6b19w4r2ldc1tac7x3ryh5k55cgscaxdhk2i15qr15nu4zqe9p89oi90dib13l5avg7z0grq7t2o2uyrfflewsx666cga9szposbawvf7tqzx1a2jdwi19n8yejkgkk83enxljtreu6vmcntsua3mn9442028tytks2wvrlh4x0zjhrjrofuvrbo6aruiz8j767ota0dqtre93d6ldsgzomt23p1ennuf9jssx095pl9xiz28saw4 == \u\m\u\u\5\n\h\h\e\9\o\g\e\c\o\q\v\k\y\a\b\k\9\z\2\j\i\u\e\o\8\q\3\d\v\w\8\h\8\4\0\s\n\z\6\g\g\i\o\1\n\q\5\l\j\b\m\j\3\6\t\h\h\y\0\f\j\n\t\6\g\z\y\l\8\h\l\i\9\q\o\l\9\4\l\t\t\7\v\0\d\v\z\8\4\n\l\k\g\k\t\v\d\v\p\i\a\x\k\e\j\l\u\o\c\h\0\h\5\w\7\i\m\u\i\h\p\r\j\d\6\t\9\r\s\0\l\t\d\q\i\4\l\m\2\e\r\0\k\h\1\1\p\5\r\g\i\q\s\j\x\x\s\e\m\v\a\a\c\4\t\f\e\a\7\r\q\b\1\0\s\m\a\s\k\j\6\b\v\7\2\4\9\7\c\1\1\a\d\i\6\6\m\6\x\b\3\g\8\l\5\k\c\y\q\o\p\7\q\8\s\v\r\j\k\i\w\5\q\v\v\z\d\4\p\3\z\o\i\2\y\j\b\x\d\w\2\0\2\1\5\p\t\t\g\b\b\y\1\j\q\i\8\t\7\c\m\6\b\1\9\w\4\r\2\l\d\c\1\t\a\c\7\x\3\r\y\h\5\k\5\5\c\g\s\c\a\x\d\h\k\2\i\1\5\q\r\1\5\n\u\4\z\q\e\9\p\8\9\o\i\9\0\d\i\b\1\3\l\5\a\v\g\7\z\0\g\r\q\7\t\2\o\2\u\y\r\f\f\l\e\w\s\x\6\6\6\c\g\a\9\s\z\p\o\s\b\a\w\v\f\7\t\q\z\x\1\a\2\j\d\w\i\1\9\n\8\y\e\j\k\g\k\k\8\3\e\n\x\l\j\t\r\e\u\6\v\m\c\n\t\s\u\a\3\m\n\9\4\4\2\0\2\8\t\y\t\k\s\2\w\v\r\l\h\4\x\0\z\j\h\r\j\r\o\f\u\v\r\b\o\6\a\r\u\i\z\8\j\7\6\7\o\t\a\0\d\q\t\r\e\9\3\d\6\l\d\s\g\z\o\m\t\2\3\p\1\e\n\n\u\f\9\j\s\s\x\0\9\5\p\l\9\x\i\z\2\8\s\a\w\4 ]] 00:45:24.180 12:08:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:24.180 12:08:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:45:24.180 [2024-06-10 12:08:56.202893] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:24.180 [2024-06-10 12:08:56.203160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169191 ] 00:45:24.438 [2024-06-10 12:08:56.384821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:24.696 [2024-06-10 12:08:56.617327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:26.328  Copying: 512/512 [B] (average 500 kBps) 00:45:26.328 00:45:26.329 12:08:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ umuu5nhhe9ogecoqvkyabk9z2jiueo8q3dvw8h840snz6ggio1nq5ljbmj36thhy0fjnt6gzyl8hli9qol94ltt7v0dvz84nlkgktvdvpiaxkejluoch0h5w7imuihprjd6t9rs0ltdqi4lm2er0kh11p5rgiqsjxxsemvaac4tfea7rqb10smaskj6bv72497c11adi66m6xb3g8l5kcyqop7q8svrjkiw5qvvzd4p3zoi2yjbxdw20215pttgbby1jqi8t7cm6b19w4r2ldc1tac7x3ryh5k55cgscaxdhk2i15qr15nu4zqe9p89oi90dib13l5avg7z0grq7t2o2uyrfflewsx666cga9szposbawvf7tqzx1a2jdwi19n8yejkgkk83enxljtreu6vmcntsua3mn9442028tytks2wvrlh4x0zjhrjrofuvrbo6aruiz8j767ota0dqtre93d6ldsgzomt23p1ennuf9jssx095pl9xiz28saw4 == \u\m\u\u\5\n\h\h\e\9\o\g\e\c\o\q\v\k\y\a\b\k\9\z\2\j\i\u\e\o\8\q\3\d\v\w\8\h\8\4\0\s\n\z\6\g\g\i\o\1\n\q\5\l\j\b\m\j\3\6\t\h\h\y\0\f\j\n\t\6\g\z\y\l\8\h\l\i\9\q\o\l\9\4\l\t\t\7\v\0\d\v\z\8\4\n\l\k\g\k\t\v\d\v\p\i\a\x\k\e\j\l\u\o\c\h\0\h\5\w\7\i\m\u\i\h\p\r\j\d\6\t\9\r\s\0\l\t\d\q\i\4\l\m\2\e\r\0\k\h\1\1\p\5\r\g\i\q\s\j\x\x\s\e\m\v\a\a\c\4\t\f\e\a\7\r\q\b\1\0\s\m\a\s\k\j\6\b\v\7\2\4\9\7\c\1\1\a\d\i\6\6\m\6\x\b\3\g\8\l\5\k\c\y\q\o\p\7\q\8\s\v\r\j\k\i\w\5\q\v\v\z\d\4\p\3\z\o\i\2\y\j\b\x\d\w\2\0\2\1\5\p\t\t\g\b\b\y\1\j\q\i\8\t\7\c\m\6\b\1\9\w\4\r\2\l\d\c\1\t\a\c\7\x\3\r\y\h\5\k\5\5\c\g\s\c\a\x\d\h\k\2\i\1\5\q\r\1\5\n\u\4\z\q\e\9\p\8\9\o\i\9\0\d\i\b\1\3\l\5\a\v\g\7\z\0\g\r\q\7\t\2\o\2\u\y\r\f\f\l\e\w\s\x\6\6\6\c\g\a\9\s\z\p\o\s\b\a\w\v\f\7\t\q\z\x\1\a\2\j\d\w\i\1\9\n\8\y\e\j\k\g\k\k\8\3\e\n\x\l\j\t\r\e\u\6\v\m\c\n\t\s\u\a\3\m\n\9\4\4\2\0\2\8\t\y\t\k\s\2\w\v\r\l\h\4\x\0\z\j\h\r\j\r\o\f\u\v\r\b\o\6\a\r\u\i\z\8\j\7\6\7\o\t\a\0\d\q\t\r\e\9\3\d\6\l\d\s\g\z\o\m\t\2\3\p\1\e\n\n\u\f\9\j\s\s\x\0\9\5\p\l\9\x\i\z\2\8\s\a\w\4 ]] 00:45:26.329 12:08:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:26.329 12:08:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:45:26.586 [2024-06-10 12:08:58.429100] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:26.586 [2024-06-10 12:08:58.429333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169222 ] 00:45:26.586 [2024-06-10 12:08:58.610574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:26.844 [2024-06-10 12:08:58.843407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:28.784  Copying: 512/512 [B] (average 500 kBps) 00:45:28.784 00:45:28.784 12:09:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ umuu5nhhe9ogecoqvkyabk9z2jiueo8q3dvw8h840snz6ggio1nq5ljbmj36thhy0fjnt6gzyl8hli9qol94ltt7v0dvz84nlkgktvdvpiaxkejluoch0h5w7imuihprjd6t9rs0ltdqi4lm2er0kh11p5rgiqsjxxsemvaac4tfea7rqb10smaskj6bv72497c11adi66m6xb3g8l5kcyqop7q8svrjkiw5qvvzd4p3zoi2yjbxdw20215pttgbby1jqi8t7cm6b19w4r2ldc1tac7x3ryh5k55cgscaxdhk2i15qr15nu4zqe9p89oi90dib13l5avg7z0grq7t2o2uyrfflewsx666cga9szposbawvf7tqzx1a2jdwi19n8yejkgkk83enxljtreu6vmcntsua3mn9442028tytks2wvrlh4x0zjhrjrofuvrbo6aruiz8j767ota0dqtre93d6ldsgzomt23p1ennuf9jssx095pl9xiz28saw4 == \u\m\u\u\5\n\h\h\e\9\o\g\e\c\o\q\v\k\y\a\b\k\9\z\2\j\i\u\e\o\8\q\3\d\v\w\8\h\8\4\0\s\n\z\6\g\g\i\o\1\n\q\5\l\j\b\m\j\3\6\t\h\h\y\0\f\j\n\t\6\g\z\y\l\8\h\l\i\9\q\o\l\9\4\l\t\t\7\v\0\d\v\z\8\4\n\l\k\g\k\t\v\d\v\p\i\a\x\k\e\j\l\u\o\c\h\0\h\5\w\7\i\m\u\i\h\p\r\j\d\6\t\9\r\s\0\l\t\d\q\i\4\l\m\2\e\r\0\k\h\1\1\p\5\r\g\i\q\s\j\x\x\s\e\m\v\a\a\c\4\t\f\e\a\7\r\q\b\1\0\s\m\a\s\k\j\6\b\v\7\2\4\9\7\c\1\1\a\d\i\6\6\m\6\x\b\3\g\8\l\5\k\c\y\q\o\p\7\q\8\s\v\r\j\k\i\w\5\q\v\v\z\d\4\p\3\z\o\i\2\y\j\b\x\d\w\2\0\2\1\5\p\t\t\g\b\b\y\1\j\q\i\8\t\7\c\m\6\b\1\9\w\4\r\2\l\d\c\1\t\a\c\7\x\3\r\y\h\5\k\5\5\c\g\s\c\a\x\d\h\k\2\i\1\5\q\r\1\5\n\u\4\z\q\e\9\p\8\9\o\i\9\0\d\i\b\1\3\l\5\a\v\g\7\z\0\g\r\q\7\t\2\o\2\u\y\r\f\f\l\e\w\s\x\6\6\6\c\g\a\9\s\z\p\o\s\b\a\w\v\f\7\t\q\z\x\1\a\2\j\d\w\i\1\9\n\8\y\e\j\k\g\k\k\8\3\e\n\x\l\j\t\r\e\u\6\v\m\c\n\t\s\u\a\3\m\n\9\4\4\2\0\2\8\t\y\t\k\s\2\w\v\r\l\h\4\x\0\z\j\h\r\j\r\o\f\u\v\r\b\o\6\a\r\u\i\z\8\j\7\6\7\o\t\a\0\d\q\t\r\e\9\3\d\6\l\d\s\g\z\o\m\t\2\3\p\1\e\n\n\u\f\9\j\s\s\x\0\9\5\p\l\9\x\i\z\2\8\s\a\w\4 ]] 00:45:28.784 12:09:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:45:28.784 12:09:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:45:28.784 [2024-06-10 12:09:00.794084] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:28.784 [2024-06-10 12:09:00.794322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169247 ] 00:45:29.042 [2024-06-10 12:09:00.973840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:29.299 [2024-06-10 12:09:01.205802] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:30.932  Copying: 512/512 [B] (average 250 kBps) 00:45:30.932 00:45:30.932 ************************************ 00:45:30.932 END TEST dd_flags_misc_forced_aio 00:45:30.932 ************************************ 00:45:30.932 12:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ umuu5nhhe9ogecoqvkyabk9z2jiueo8q3dvw8h840snz6ggio1nq5ljbmj36thhy0fjnt6gzyl8hli9qol94ltt7v0dvz84nlkgktvdvpiaxkejluoch0h5w7imuihprjd6t9rs0ltdqi4lm2er0kh11p5rgiqsjxxsemvaac4tfea7rqb10smaskj6bv72497c11adi66m6xb3g8l5kcyqop7q8svrjkiw5qvvzd4p3zoi2yjbxdw20215pttgbby1jqi8t7cm6b19w4r2ldc1tac7x3ryh5k55cgscaxdhk2i15qr15nu4zqe9p89oi90dib13l5avg7z0grq7t2o2uyrfflewsx666cga9szposbawvf7tqzx1a2jdwi19n8yejkgkk83enxljtreu6vmcntsua3mn9442028tytks2wvrlh4x0zjhrjrofuvrbo6aruiz8j767ota0dqtre93d6ldsgzomt23p1ennuf9jssx095pl9xiz28saw4 == \u\m\u\u\5\n\h\h\e\9\o\g\e\c\o\q\v\k\y\a\b\k\9\z\2\j\i\u\e\o\8\q\3\d\v\w\8\h\8\4\0\s\n\z\6\g\g\i\o\1\n\q\5\l\j\b\m\j\3\6\t\h\h\y\0\f\j\n\t\6\g\z\y\l\8\h\l\i\9\q\o\l\9\4\l\t\t\7\v\0\d\v\z\8\4\n\l\k\g\k\t\v\d\v\p\i\a\x\k\e\j\l\u\o\c\h\0\h\5\w\7\i\m\u\i\h\p\r\j\d\6\t\9\r\s\0\l\t\d\q\i\4\l\m\2\e\r\0\k\h\1\1\p\5\r\g\i\q\s\j\x\x\s\e\m\v\a\a\c\4\t\f\e\a\7\r\q\b\1\0\s\m\a\s\k\j\6\b\v\7\2\4\9\7\c\1\1\a\d\i\6\6\m\6\x\b\3\g\8\l\5\k\c\y\q\o\p\7\q\8\s\v\r\j\k\i\w\5\q\v\v\z\d\4\p\3\z\o\i\2\y\j\b\x\d\w\2\0\2\1\5\p\t\t\g\b\b\y\1\j\q\i\8\t\7\c\m\6\b\1\9\w\4\r\2\l\d\c\1\t\a\c\7\x\3\r\y\h\5\k\5\5\c\g\s\c\a\x\d\h\k\2\i\1\5\q\r\1\5\n\u\4\z\q\e\9\p\8\9\o\i\9\0\d\i\b\1\3\l\5\a\v\g\7\z\0\g\r\q\7\t\2\o\2\u\y\r\f\f\l\e\w\s\x\6\6\6\c\g\a\9\s\z\p\o\s\b\a\w\v\f\7\t\q\z\x\1\a\2\j\d\w\i\1\9\n\8\y\e\j\k\g\k\k\8\3\e\n\x\l\j\t\r\e\u\6\v\m\c\n\t\s\u\a\3\m\n\9\4\4\2\0\2\8\t\y\t\k\s\2\w\v\r\l\h\4\x\0\z\j\h\r\j\r\o\f\u\v\r\b\o\6\a\r\u\i\z\8\j\7\6\7\o\t\a\0\d\q\t\r\e\9\3\d\6\l\d\s\g\z\o\m\t\2\3\p\1\e\n\n\u\f\9\j\s\s\x\0\9\5\p\l\9\x\i\z\2\8\s\a\w\4 ]] 00:45:30.932 00:45:30.932 real 0m18.002s 00:45:30.932 user 0m15.053s 00:45:30.932 sys 0m1.915s 00:45:30.932 12:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:30.932 12:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:45:31.191 12:09:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:45:31.191 12:09:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:45:31.191 12:09:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:45:31.191 00:45:31.191 real 1m15.804s 00:45:31.191 user 1m1.792s 00:45:31.191 sys 0m8.007s 00:45:31.191 12:09:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:31.191 12:09:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:45:31.191 ************************************ 00:45:31.191 END TEST spdk_dd_posix 00:45:31.191 ************************************ 00:45:31.191 12:09:03 spdk_dd -- 
dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:45:31.191 12:09:03 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:31.191 12:09:03 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:31.191 12:09:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:45:31.191 ************************************ 00:45:31.191 START TEST spdk_dd_malloc 00:45:31.191 ************************************ 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:45:31.191 * Looking for test storage... 00:45:31.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:45:31.191 ************************************ 00:45:31.191 START TEST dd_malloc_copy 00:45:31.191 ************************************ 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # malloc_copy 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:45:31.191 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:45:31.192 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:45:31.192 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:45:31.192 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:45:31.192 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:45:31.192 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:45:31.192 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:45:31.192 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:45:31.192 12:09:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:45:31.192 { 00:45:31.192 "subsystems": [ 00:45:31.192 { 00:45:31.192 "subsystem": "bdev", 00:45:31.192 "config": [ 00:45:31.192 { 00:45:31.192 "params": { 00:45:31.192 "block_size": 512, 00:45:31.192 "num_blocks": 1048576, 00:45:31.192 "name": "malloc0" 00:45:31.192 }, 00:45:31.192 "method": "bdev_malloc_create" 00:45:31.192 }, 00:45:31.192 { 00:45:31.192 "params": { 00:45:31.192 "block_size": 512, 00:45:31.192 "num_blocks": 1048576, 00:45:31.192 "name": "malloc1" 00:45:31.192 }, 00:45:31.192 "method": "bdev_malloc_create" 00:45:31.192 }, 00:45:31.192 { 00:45:31.192 "method": "bdev_wait_for_examine" 00:45:31.192 } 00:45:31.192 ] 00:45:31.192 } 00:45:31.192 ] 00:45:31.192 } 00:45:31.450 [2024-06-10 12:09:03.271200] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:31.450 [2024-06-10 12:09:03.272177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169345 ] 00:45:31.450 [2024-06-10 12:09:03.460226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:31.708 [2024-06-10 12:09:03.680477] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:40.341  Copying: 193/512 [MB] (193 MBps) Copying: 387/512 [MB] (193 MBps) Copying: 512/512 [MB] (average 195 MBps) 00:45:40.341 00:45:40.341 12:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:45:40.341 12:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:45:40.341 12:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:45:40.341 12:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:45:40.341 { 00:45:40.341 "subsystems": [ 00:45:40.341 { 00:45:40.341 "subsystem": "bdev", 00:45:40.341 "config": [ 00:45:40.341 { 00:45:40.341 "params": { 00:45:40.341 "block_size": 512, 00:45:40.341 "num_blocks": 1048576, 00:45:40.341 "name": "malloc0" 00:45:40.341 }, 00:45:40.341 "method": "bdev_malloc_create" 00:45:40.341 }, 00:45:40.341 { 00:45:40.341 "params": { 00:45:40.341 "block_size": 512, 00:45:40.341 "num_blocks": 1048576, 00:45:40.341 "name": "malloc1" 00:45:40.341 }, 00:45:40.341 "method": "bdev_malloc_create" 00:45:40.341 }, 00:45:40.341 { 00:45:40.341 "method": "bdev_wait_for_examine" 00:45:40.341 } 00:45:40.341 ] 00:45:40.341 } 00:45:40.341 ] 00:45:40.341 } 00:45:40.341 [2024-06-10 12:09:12.128754] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:40.341 [2024-06-10 12:09:12.129023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169454 ] 00:45:40.341 [2024-06-10 12:09:12.320610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:40.598 [2024-06-10 12:09:12.578097] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:48.605  Copying: 216/512 [MB] (216 MBps) Copying: 439/512 [MB] (223 MBps) Copying: 512/512 [MB] (average 220 MBps) 00:45:48.605 00:45:48.605 00:45:48.605 real 0m17.292s 00:45:48.605 user 0m15.973s 00:45:48.605 sys 0m1.149s 00:45:48.605 12:09:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:48.605 12:09:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:45:48.605 ************************************ 00:45:48.605 END TEST dd_malloc_copy 00:45:48.605 ************************************ 00:45:48.605 00:45:48.605 real 0m17.463s 00:45:48.605 user 0m16.047s 00:45:48.605 sys 0m1.258s 00:45:48.605 12:09:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:48.605 12:09:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:45:48.605 ************************************ 00:45:48.605 END TEST spdk_dd_malloc 00:45:48.605 ************************************ 00:45:48.605 12:09:20 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:45:48.605 12:09:20 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:45:48.605 12:09:20 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:48.605 12:09:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:45:48.605 ************************************ 00:45:48.605 START TEST spdk_dd_bdev_to_bdev 00:45:48.605 ************************************ 00:45:48.605 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:45:48.884 * Looking for test storage... 
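The two dd_malloc_copy passes above shuttle 512 MiB (1048576 blocks of 512 bytes) between a pair of RAM-backed malloc bdevs, first malloc0 to malloc1 and then back, with the bdev layout handed to spdk_dd as JSON on an extra file descriptor. A rough standalone sketch of the same invocation follows; spdk_dd stands for the full build/bin path used in the log, and passing the config as a named file is assumed to behave the same as the /dev/fd/62 handoff the test's gen_conf helper performs:

  # malloc_copy.json, the bdev layout spdk_dd consumes (mirrors the config echoed above):
  {
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 }, "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 }, "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    } ]
  }

  # copy malloc0 into malloc1; the reverse run swaps --ib and --ob
  spdk_dd --ib=malloc0 --ob=malloc1 --json malloc_copy.json

The roughly 195 MB/s and 220 MB/s figures reported for the two directions are averages over the full 512 MiB copy.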
00:45:48.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 
-- # bdev0=Nvme0n1 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:45:48.884 12:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:45:48.884 [2024-06-10 12:09:20.796253] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:45:48.884 [2024-06-10 12:09:20.796448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169616 ] 00:45:49.149 [2024-06-10 12:09:20.973385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:49.149 [2024-06-10 12:09:21.189978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:51.088  Copying: 256/256 [MB] (average 1108 MBps) 00:45:51.088 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:51.346 ************************************ 00:45:51.346 START TEST dd_inflate_file 00:45:51.346 ************************************ 00:45:51.346 12:09:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:45:51.346 [2024-06-10 12:09:23.242482] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
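Two details of the setup above matter for the rest of this suite: the 256 MiB file written from /dev/zero to test/dd/aio1 is the backing store that bdev_aio_create exposes as the bdev aio1 with a 4096-byte block size, and it sits alongside the real NVMe controller attached at 0000:00:10.0 as Nvme0n1. Reduced to one line, with spdk_dd standing for the full build/bin path from the log:

  # 256 MiB zero-filled backing file, later exposed as the aio1 bdev
  spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256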
00:45:51.346 [2024-06-10 12:09:23.243544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169648 ] 00:45:51.604 [2024-06-10 12:09:23.426497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:51.604 [2024-06-10 12:09:23.650401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:53.545  Copying: 64/64 [MB] (average 1142 MBps) 00:45:53.545 00:45:53.545 00:45:53.545 real 0m2.239s 00:45:53.546 user 0m1.759s 00:45:53.546 sys 0m0.349s 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:45:53.546 ************************************ 00:45:53.546 END TEST dd_inflate_file 00:45:53.546 ************************************ 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:53.546 ************************************ 00:45:53.546 START TEST dd_copy_to_out_bdev 00:45:53.546 ************************************ 00:45:53.546 12:09:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:45:53.546 { 00:45:53.546 "subsystems": [ 00:45:53.546 { 00:45:53.546 "subsystem": "bdev", 00:45:53.546 "config": [ 00:45:53.546 { 00:45:53.546 "params": { 00:45:53.546 "block_size": 4096, 00:45:53.546 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:53.546 "name": "aio1" 00:45:53.546 }, 00:45:53.546 "method": "bdev_aio_create" 00:45:53.546 }, 00:45:53.546 { 00:45:53.546 "params": { 00:45:53.546 "trtype": "pcie", 00:45:53.546 "traddr": "0000:00:10.0", 00:45:53.546 "name": "Nvme0" 00:45:53.546 }, 00:45:53.546 "method": "bdev_nvme_attach_controller" 00:45:53.546 }, 00:45:53.546 { 00:45:53.546 "method": "bdev_wait_for_examine" 00:45:53.546 } 00:45:53.546 ] 00:45:53.546 } 00:45:53.546 ] 00:45:53.546 } 00:45:53.546 [2024-06-10 12:09:25.563846] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
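The test_file0_size of 67108891 bytes reported by wc -c above is consistent with the setup: the magic line is 26 characters plus the newline from echo, so 27 bytes were already sitting at the start of dd.dump0 when dd_inflate_file appended 64 MiB of zeroes with --oflag=append, and 27 + 64 * 1048576 = 27 + 67108864 = 67108891.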
00:45:53.546 [2024-06-10 12:09:25.564059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169705 ] 00:45:53.804 [2024-06-10 12:09:25.747687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:54.063 [2024-06-10 12:09:25.958317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:56.814  Copying: 64/64 [MB] (average 74 MBps) 00:45:56.814 00:45:56.814 00:45:56.814 real 0m3.132s 00:45:56.814 user 0m2.717s 00:45:56.814 sys 0m0.306s 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:56.814 ************************************ 00:45:56.814 END TEST dd_copy_to_out_bdev 00:45:56.814 ************************************ 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:56.814 ************************************ 00:45:56.814 START TEST dd_offset_magic 00:45:56.814 ************************************ 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # offset_magic 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:45:56.814 12:09:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:45:56.814 { 00:45:56.814 "subsystems": [ 00:45:56.814 { 00:45:56.814 "subsystem": "bdev", 00:45:56.814 "config": [ 00:45:56.814 { 00:45:56.814 "params": { 00:45:56.814 "block_size": 4096, 00:45:56.814 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:56.814 "name": "aio1" 00:45:56.814 }, 00:45:56.814 "method": "bdev_aio_create" 00:45:56.814 }, 00:45:56.814 { 00:45:56.814 "params": { 00:45:56.814 "trtype": "pcie", 00:45:56.814 "traddr": "0000:00:10.0", 00:45:56.814 "name": "Nvme0" 00:45:56.814 }, 00:45:56.814 "method": "bdev_nvme_attach_controller" 00:45:56.814 }, 00:45:56.814 { 00:45:56.814 "method": "bdev_wait_for_examine" 00:45:56.814 } 00:45:56.814 ] 00:45:56.814 } 00:45:56.814 ] 00:45:56.814 } 
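dd_copy_to_out_bdev has just pushed dd.dump0 (the magic line plus its zero padding) onto Nvme0n1, and dd_offset_magic now runs one round per offset in (16 64): write 65 MiB from Nvme0n1 into aio1 starting at that offset (--seek counts in --bs units, 1 MiB here), read 1 MiB back from the same offset into dd.dump1 with --skip, and check that the first 26 bytes are still the magic string. A rough shell sketch of a single round, with spdk_dd standing for the full build/bin path and bdevs.json as a stand-in for the aio1/Nvme0 config the test actually passes on /dev/fd/62:

  offset=16
  # write 65 MiB of Nvme0n1 into aio1, starting 16 MiB into aio1
  spdk_dd --ib=Nvme0n1 --ob=aio1 --bs=1048576 --count=65 --seek="$offset" --json bdevs.json
  # read 1 MiB back from the same offset and verify the magic survived the round trip
  spdk_dd --ib=aio1 --of=dd.dump1 --bs=1048576 --count=1 --skip="$offset" --json bdevs.json
  read -rn26 magic_check < dd.dump1
  [[ "$magic_check" == "This Is Our Magic, find it" ]] || echo "magic not found at offset $offset"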
00:45:56.814 [2024-06-10 12:09:28.758843] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:45:56.814 [2024-06-10 12:09:28.759619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169769 ] 00:45:57.073 [2024-06-10 12:09:28.940692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:57.331 [2024-06-10 12:09:29.171888] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:45:59.638  Copying: 65/65 [MB] (average 170 MBps) 00:45:59.638 00:45:59.638 12:09:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:45:59.638 12:09:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:45:59.638 12:09:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:45:59.638 12:09:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:45:59.638 { 00:45:59.638 "subsystems": [ 00:45:59.638 { 00:45:59.638 "subsystem": "bdev", 00:45:59.638 "config": [ 00:45:59.638 { 00:45:59.638 "params": { 00:45:59.638 "block_size": 4096, 00:45:59.638 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:59.638 "name": "aio1" 00:45:59.638 }, 00:45:59.638 "method": "bdev_aio_create" 00:45:59.638 }, 00:45:59.638 { 00:45:59.638 "params": { 00:45:59.638 "trtype": "pcie", 00:45:59.638 "traddr": "0000:00:10.0", 00:45:59.638 "name": "Nvme0" 00:45:59.638 }, 00:45:59.638 "method": "bdev_nvme_attach_controller" 00:45:59.638 }, 00:45:59.638 { 00:45:59.638 "method": "bdev_wait_for_examine" 00:45:59.638 } 00:45:59.638 ] 00:45:59.638 } 00:45:59.638 ] 00:45:59.638 } 00:45:59.638 [2024-06-10 12:09:31.451181] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:45:59.638 [2024-06-10 12:09:31.451387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169814 ] 00:45:59.638 [2024-06-10 12:09:31.632424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:59.903 [2024-06-10 12:09:31.857886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:01.850  Copying: 1024/1024 [kB] (average 500 MBps) 00:46:01.850 00:46:01.850 12:09:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:46:01.850 12:09:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:46:01.850 12:09:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:46:01.850 12:09:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:46:01.850 12:09:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:46:01.850 12:09:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:46:01.850 12:09:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:46:01.850 { 00:46:01.850 "subsystems": [ 00:46:01.850 { 00:46:01.850 "subsystem": "bdev", 00:46:01.850 "config": [ 00:46:01.850 { 00:46:01.850 "params": { 00:46:01.850 "block_size": 4096, 00:46:01.850 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:46:01.850 "name": "aio1" 00:46:01.850 }, 00:46:01.850 "method": "bdev_aio_create" 00:46:01.850 }, 00:46:01.850 { 00:46:01.850 "params": { 00:46:01.850 "trtype": "pcie", 00:46:01.850 "traddr": "0000:00:10.0", 00:46:01.850 "name": "Nvme0" 00:46:01.850 }, 00:46:01.850 "method": "bdev_nvme_attach_controller" 00:46:01.850 }, 00:46:01.850 { 00:46:01.850 "method": "bdev_wait_for_examine" 00:46:01.850 } 00:46:01.850 ] 00:46:01.850 } 00:46:01.850 ] 00:46:01.850 } 00:46:01.850 [2024-06-10 12:09:33.783629] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:46:01.850 [2024-06-10 12:09:33.783856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169847 ] 00:46:02.108 [2024-06-10 12:09:33.974295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:02.366 [2024-06-10 12:09:34.265135] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:03.867  Copying: 65/65 [MB] (average 1444 MBps) 00:46:03.867 00:46:04.125 12:09:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:46:04.125 12:09:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:46:04.125 12:09:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:46:04.125 12:09:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:46:04.125 { 00:46:04.125 "subsystems": [ 00:46:04.125 { 00:46:04.125 "subsystem": "bdev", 00:46:04.125 "config": [ 00:46:04.125 { 00:46:04.125 "params": { 00:46:04.125 "block_size": 4096, 00:46:04.125 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:46:04.125 "name": "aio1" 00:46:04.125 }, 00:46:04.125 "method": "bdev_aio_create" 00:46:04.125 }, 00:46:04.125 { 00:46:04.125 "params": { 00:46:04.125 "trtype": "pcie", 00:46:04.125 "traddr": "0000:00:10.0", 00:46:04.125 "name": "Nvme0" 00:46:04.125 }, 00:46:04.125 "method": "bdev_nvme_attach_controller" 00:46:04.125 }, 00:46:04.125 { 00:46:04.125 "method": "bdev_wait_for_examine" 00:46:04.125 } 00:46:04.125 ] 00:46:04.125 } 00:46:04.125 ] 00:46:04.125 } 00:46:04.125 [2024-06-10 12:09:36.009194] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:46:04.125 [2024-06-10 12:09:36.009398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169877 ] 00:46:04.383 [2024-06-10 12:09:36.186449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:04.383 [2024-06-10 12:09:36.412389] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:06.327  Copying: 1024/1024 [kB] (average 1000 MBps) 00:46:06.327 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:46:06.327 00:46:06.327 real 0m9.560s 00:46:06.327 user 0m7.573s 00:46:06.327 sys 0m1.197s 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:06.327 ************************************ 00:46:06.327 END TEST dd_offset_magic 00:46:06.327 ************************************ 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:46:06.327 12:09:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:46:06.327 { 00:46:06.327 "subsystems": [ 00:46:06.327 { 00:46:06.327 "subsystem": "bdev", 00:46:06.327 "config": [ 00:46:06.327 { 00:46:06.327 "params": { 00:46:06.327 "block_size": 4096, 00:46:06.327 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:46:06.327 "name": "aio1" 00:46:06.327 }, 00:46:06.327 "method": "bdev_aio_create" 00:46:06.327 }, 00:46:06.327 { 00:46:06.327 "params": { 00:46:06.327 "trtype": "pcie", 00:46:06.327 "traddr": "0000:00:10.0", 00:46:06.327 "name": "Nvme0" 00:46:06.327 }, 00:46:06.327 "method": "bdev_nvme_attach_controller" 00:46:06.327 }, 00:46:06.327 { 00:46:06.327 "method": "bdev_wait_for_examine" 00:46:06.327 } 00:46:06.327 ] 00:46:06.327 } 00:46:06.327 ] 00:46:06.327 } 00:46:06.327 [2024-06-10 12:09:38.364294] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:46:06.327 [2024-06-10 12:09:38.364508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169933 ] 00:46:06.586 [2024-06-10 12:09:38.558037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:06.845 [2024-06-10 12:09:38.816768] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:08.787  Copying: 5120/5120 [kB] (average 1000 MBps) 00:46:08.787 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:46:08.787 12:09:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:46:08.787 { 00:46:08.787 "subsystems": [ 00:46:08.787 { 00:46:08.787 "subsystem": "bdev", 00:46:08.787 "config": [ 00:46:08.787 { 00:46:08.787 "params": { 00:46:08.787 "block_size": 4096, 00:46:08.787 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:46:08.787 "name": "aio1" 00:46:08.787 }, 00:46:08.787 "method": "bdev_aio_create" 00:46:08.787 }, 00:46:08.787 { 00:46:08.787 "params": { 00:46:08.787 "trtype": "pcie", 00:46:08.787 "traddr": "0000:00:10.0", 00:46:08.787 "name": "Nvme0" 00:46:08.787 }, 00:46:08.787 "method": "bdev_nvme_attach_controller" 00:46:08.787 }, 00:46:08.787 { 00:46:08.787 "method": "bdev_wait_for_examine" 00:46:08.787 } 00:46:08.787 ] 00:46:08.787 } 00:46:08.787 ] 00:46:08.787 } 00:46:08.787 [2024-06-10 12:09:40.534731] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
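The cleanup path wipes what these tests touched: clear_nvme zeroes the start of Nvme0n1 and then of aio1 by copying from /dev/zero with bs=1048576 and count=5. The count follows from the size argument 4194330, which is 4 * 1048576 + 26 = 4194330 bytes rounded up to five whole 1 MiB units, and the two 'Copying: 5120/5120 [kB]' lines (one just above, one just below) are those 5 MiB being written to each bdev.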
00:46:08.787 [2024-06-10 12:09:40.534881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169965 ] 00:46:08.787 [2024-06-10 12:09:40.695153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:09.046 [2024-06-10 12:09:40.911481] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:10.996  Copying: 5120/5120 [kB] (average 1250 MBps) 00:46:10.996 00:46:10.996 12:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:46:10.996 00:46:10.996 real 0m22.181s 00:46:10.996 user 0m17.662s 00:46:10.996 sys 0m3.112s 00:46:10.996 12:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:10.996 12:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:46:10.996 ************************************ 00:46:10.996 END TEST spdk_dd_bdev_to_bdev 00:46:10.996 ************************************ 00:46:10.996 12:09:42 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:46:10.996 12:09:42 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:46:10.996 12:09:42 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:10.996 12:09:42 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:10.996 12:09:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:46:10.996 ************************************ 00:46:10.996 START TEST spdk_dd_sparse 00:46:10.996 ************************************ 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:46:10.996 * Looking for test storage... 
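The spdk_dd_sparse suite starting here is about hole-preserving copies. Its prepare step truncates a 100 MiB (104857600-byte) backing file for the dd_aio bdev and drops three 4 MiB data extents into file_zero1 at offsets 0, 16 MiB and 32 MiB (dd with bs=4M, count=1 and seek=0, 4, 8). That layout explains the stat figures further down: the apparent size (%s) is 36 MiB = 37748736 bytes, the end of the last extent, while only 24576 blocks of 512 bytes (%b), i.e. the 12 MiB of real data, are allocated, and a faithful sparse copy has to reproduce both numbers. A compressed sketch of the first test's shape, with spdk_dd standing for the full build/bin path and lvstore.json as a stand-in for the dd_aio/dd_lvstore config passed on /dev/fd/62:

  truncate dd_sparse_aio_disk --size 104857600
  for seek in 0 4 8; do dd if=/dev/zero of=file_zero1 bs=4M count=1 seek="$seek"; done
  spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json lvstore.json
  [[ "$(stat --printf=%s file_zero1)" == "$(stat --printf=%s file_zero2)" ]]   # same apparent size
  [[ "$(stat --printf=%b file_zero1)" == "$(stat --printf=%b file_zero2)" ]]   # same allocated block count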
00:46:10.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:46:10.996 1+0 records in 00:46:10.996 1+0 records out 00:46:10.996 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0120403 s, 348 MB/s 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:46:10.996 1+0 records in 00:46:10.996 1+0 records out 00:46:10.996 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0118077 s, 355 MB/s 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:46:10.996 1+0 records in 00:46:10.996 1+0 records out 00:46:10.996 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00973851 s, 431 MB/s 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:46:10.996 ************************************ 00:46:10.996 START TEST dd_sparse_file_to_file 00:46:10.996 ************************************ 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # file_to_file 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:46:10.996 12:09:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:46:10.996 12:09:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:46:10.996 12:09:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:46:10.996 12:09:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:46:10.996 12:09:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:46:10.996 12:09:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:46:10.996 12:09:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:46:10.996 12:09:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:46:10.996 { 00:46:10.996 "subsystems": [ 00:46:10.996 { 00:46:10.996 "subsystem": "bdev", 00:46:10.996 "config": [ 00:46:10.996 { 00:46:10.996 "params": { 00:46:10.996 "block_size": 4096, 00:46:10.996 "filename": "dd_sparse_aio_disk", 00:46:10.996 "name": "dd_aio" 00:46:10.996 }, 00:46:10.996 "method": "bdev_aio_create" 00:46:10.996 }, 00:46:10.996 { 00:46:10.996 "params": { 00:46:10.996 "lvs_name": "dd_lvstore", 00:46:10.996 "bdev_name": 
"dd_aio" 00:46:10.996 }, 00:46:10.996 "method": "bdev_lvol_create_lvstore" 00:46:10.996 }, 00:46:10.996 { 00:46:10.996 "method": "bdev_wait_for_examine" 00:46:10.996 } 00:46:10.996 ] 00:46:10.996 } 00:46:10.996 ] 00:46:10.996 } 00:46:11.255 [2024-06-10 12:09:43.081302] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:11.255 [2024-06-10 12:09:43.081508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170055 ] 00:46:11.255 [2024-06-10 12:09:43.258594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:11.513 [2024-06-10 12:09:43.461357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:13.457  Copying: 12/36 [MB] (average 923 MBps) 00:46:13.457 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:46:13.457 00:46:13.457 real 0m2.244s 00:46:13.457 user 0m1.806s 00:46:13.457 sys 0m0.300s 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:46:13.457 ************************************ 00:46:13.457 END TEST dd_sparse_file_to_file 00:46:13.457 ************************************ 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:46:13.457 ************************************ 00:46:13.457 START TEST dd_sparse_file_to_bdev 00:46:13.457 ************************************ 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # file_to_bdev 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:46:13.457 12:09:45 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:46:13.457 12:09:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:46:13.457 { 00:46:13.457 "subsystems": [ 00:46:13.457 { 00:46:13.457 "subsystem": "bdev", 00:46:13.457 "config": [ 00:46:13.457 { 00:46:13.457 "params": { 00:46:13.457 "block_size": 4096, 00:46:13.457 "filename": "dd_sparse_aio_disk", 00:46:13.457 "name": "dd_aio" 00:46:13.457 }, 00:46:13.457 "method": "bdev_aio_create" 00:46:13.457 }, 00:46:13.457 { 00:46:13.457 "params": { 00:46:13.457 "lvs_name": "dd_lvstore", 00:46:13.457 "lvol_name": "dd_lvol", 00:46:13.457 "size_in_mib": 36, 00:46:13.457 "thin_provision": true 00:46:13.457 }, 00:46:13.457 "method": "bdev_lvol_create" 00:46:13.457 }, 00:46:13.457 { 00:46:13.457 "method": "bdev_wait_for_examine" 00:46:13.457 } 00:46:13.457 ] 00:46:13.457 } 00:46:13.457 ] 00:46:13.457 } 00:46:13.457 [2024-06-10 12:09:45.389466] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:13.457 [2024-06-10 12:09:45.389685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170115 ] 00:46:13.716 [2024-06-10 12:09:45.572132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:14.116 [2024-06-10 12:09:45.778834] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:15.614  Copying: 12/36 [MB] (average 521 MBps) 00:46:15.614 00:46:15.614 00:46:15.614 real 0m2.204s 00:46:15.614 user 0m1.838s 00:46:15.614 sys 0m0.263s 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:46:15.614 ************************************ 00:46:15.614 END TEST dd_sparse_file_to_bdev 00:46:15.614 ************************************ 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:46:15.614 ************************************ 00:46:15.614 START TEST dd_sparse_bdev_to_file 00:46:15.614 ************************************ 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # bdev_to_file 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:46:15.614 
12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:46:15.614 12:09:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:46:15.614 { 00:46:15.614 "subsystems": [ 00:46:15.614 { 00:46:15.614 "subsystem": "bdev", 00:46:15.614 "config": [ 00:46:15.614 { 00:46:15.614 "params": { 00:46:15.614 "block_size": 4096, 00:46:15.614 "filename": "dd_sparse_aio_disk", 00:46:15.614 "name": "dd_aio" 00:46:15.614 }, 00:46:15.614 "method": "bdev_aio_create" 00:46:15.614 }, 00:46:15.614 { 00:46:15.615 "method": "bdev_wait_for_examine" 00:46:15.615 } 00:46:15.615 ] 00:46:15.615 } 00:46:15.615 ] 00:46:15.615 } 00:46:15.615 [2024-06-10 12:09:47.658821] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:15.615 [2024-06-10 12:09:47.659027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170179 ] 00:46:15.873 [2024-06-10 12:09:47.846102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.131 [2024-06-10 12:09:48.107574] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:18.126  Copying: 12/36 [MB] (average 923 MBps) 00:46:18.126 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:46:18.126 00:46:18.126 real 0m2.391s 00:46:18.126 user 0m1.991s 00:46:18.126 sys 0m0.293s 00:46:18.126 12:09:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:18.126 12:09:49 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:46:18.126 ************************************ 00:46:18.126 END TEST dd_sparse_bdev_to_file 00:46:18.126 ************************************ 00:46:18.126 12:09:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:46:18.126 12:09:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:46:18.126 12:09:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:46:18.126 12:09:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:46:18.126 12:09:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:46:18.126 00:46:18.126 real 0m7.208s 00:46:18.126 user 0m5.774s 00:46:18.126 sys 0m1.091s 00:46:18.126 12:09:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:18.126 12:09:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:46:18.126 ************************************ 00:46:18.126 END TEST spdk_dd_sparse 00:46:18.126 ************************************ 00:46:18.126 12:09:50 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:46:18.126 12:09:50 spdk_dd -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:18.126 12:09:50 spdk_dd -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:18.126 12:09:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:46:18.126 ************************************ 00:46:18.126 START TEST spdk_dd_negative 00:46:18.126 ************************************ 00:46:18.126 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:46:18.385 * Looking for test storage... 00:46:18.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:18.385 ************************************ 00:46:18.385 START TEST dd_invalid_arguments 00:46:18.385 ************************************ 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # invalid_arguments 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:46:18.385 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@649 -- # local es=0 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- 
common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:46:18.386 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:46:18.386 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:46:18.386 00:46:18.386 CPU options: 00:46:18.386 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:46:18.386 (like [0,1,10]) 00:46:18.386 --lcores lcore to CPU mapping list. The list is in the format: 00:46:18.386 [<,lcores[@CPUs]>...] 00:46:18.386 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:46:18.386 Within the group, '-' is used for range separator, 00:46:18.386 ',' is used for single number separator. 00:46:18.386 '( )' can be omitted for single element group, 00:46:18.386 '@' can be omitted if cpus and lcores have the same value 00:46:18.386 --disable-cpumask-locks Disable CPU core lock files. 00:46:18.386 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:46:18.386 pollers in the app support interrupt mode) 00:46:18.386 -p, --main-core main (primary) core for DPDK 00:46:18.386 00:46:18.386 Configuration options: 00:46:18.386 -c, --config, --json JSON config file 00:46:18.386 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:46:18.386 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:46:18.386 --wait-for-rpc wait for RPCs to initialize subsystems 00:46:18.386 --rpcs-allowed comma-separated list of permitted RPCS 00:46:18.386 --json-ignore-init-errors don't exit on invalid config entry 00:46:18.386 00:46:18.386 Memory options: 00:46:18.386 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:46:18.386 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:46:18.386 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:46:18.386 -R, --huge-unlink unlink huge files after initialization 00:46:18.386 -n, --mem-channels number of memory channels used for DPDK 00:46:18.386 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:46:18.386 --msg-mempool-size global message memory pool size in count (default: 262143) 00:46:18.386 --no-huge run without using hugepages 00:46:18.386 -i, --shm-id shared memory ID (optional) 00:46:18.386 -g, --single-file-segments force creating just one hugetlbfs file 00:46:18.386 00:46:18.386 PCI options: 00:46:18.386 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:46:18.386 -B, --pci-blocked pci addr to block (can be used more than once) 00:46:18.386 -u, --no-pci disable PCI access 00:46:18.386 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:46:18.386 00:46:18.386 Log options: 00:46:18.386 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:46:18.386 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:46:18.386 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:46:18.386 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:46:18.386 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:46:18.386 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:46:18.386 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:46:18.386 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:46:18.386 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:46:18.386 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:46:18.386 virtio_vfio_user, vmd) 00:46:18.386 --silence-noticelog disable notice level logging to stderr 00:46:18.386 00:46:18.386 Trace options: 00:46:18.386 --num-trace-entries number of trace entries for each core, must be power of 2, 00:46:18.386 setting 0 to disable trace (default 32768) 00:46:18.386 Tracepoints vary in size and can use more than one trace entry. 00:46:18.386 -e, --tpoint-group [:] 00:46:18.386 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:46:18.386 [2024-06-10 12:09:50.307241] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:46:18.386 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:46:18.386 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:46:18.386 a tracepoint group. First tpoint inside a group can be enabled by 00:46:18.386 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:46:18.386 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:46:18.386 in /include/spdk_internal/trace_defs.h 00:46:18.386 00:46:18.386 Other options: 00:46:18.386 -h, --help show this usage 00:46:18.386 -v, --version print SPDK version 00:46:18.386 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:46:18.386 --env-context Opaque context for use of the env implementation 00:46:18.386 00:46:18.386 Application specific: 00:46:18.386 [--------- DD Options ---------] 00:46:18.386 --if Input file. Must specify either --if or --ib. 00:46:18.386 --ib Input bdev. Must specifier either --if or --ib 00:46:18.386 --of Output file. Must specify either --of or --ob. 00:46:18.386 --ob Output bdev. Must specify either --of or --ob. 00:46:18.386 --iflag Input file flags. 00:46:18.386 --oflag Output file flags. 00:46:18.386 --bs I/O unit size (default: 4096) 00:46:18.386 --qd Queue depth (default: 2) 00:46:18.386 --count I/O unit count. The number of I/O units to copy. (default: all) 00:46:18.386 --skip Skip this many I/O units at start of input. (default: 0) 00:46:18.386 --seek Skip this many I/O units at start of output. (default: 0) 00:46:18.386 --aio Force usage of AIO. (by default io_uring is used if available) 00:46:18.386 --sparse Enable hole skipping in input target 00:46:18.386 Available iflag and oflag values: 00:46:18.386 append - append mode 00:46:18.386 direct - use direct I/O for data 00:46:18.386 directory - fail unless a directory 00:46:18.386 dsync - use synchronized I/O for data 00:46:18.386 noatime - do not update access time 00:46:18.386 noctty - do not assign controlling terminal from file 00:46:18.386 nofollow - do not follow symlinks 00:46:18.386 nonblock - use non-blocking I/O 00:46:18.386 sync - use synchronized I/O for data and metadata 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # es=2 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:18.386 00:46:18.386 real 0m0.141s 00:46:18.386 user 0m0.055s 00:46:18.386 sys 0m0.087s 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:46:18.386 ************************************ 00:46:18.386 END TEST dd_invalid_arguments 00:46:18.386 ************************************ 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:18.386 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:18.386 ************************************ 00:46:18.387 START TEST dd_double_input 00:46:18.387 ************************************ 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # double_input 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@649 -- # local es=0 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:18.387 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:46:18.646 [2024-06-10 12:09:50.503046] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
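The dd_double_input case above feeds spdk_dd both an input file (--if) and an input bdev (--ib) and only passes when the binary refuses the combination with "You may specify either --if or --ib, but not both." The same check can be run by hand without the NOT/valid_exec_arg helpers from common/autotest_common.sh; this is a minimal sketch assuming the binary path from this run, with a plain exit-code test standing in for the harness:

    # Sketch: reproduce the dd_double_input negative check outside autotest.
    # SPDK_DD matches the path traced above; point it at a local build if needed.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    dump0=$(mktemp)                      # stand-in for test/dd/dd.dump0
    if "$SPDK_DD" --if="$dump0" --ib= --ob=; then
        echo "FAIL: spdk_dd accepted --if and --ib together" >&2
    else
        echo "OK: conflicting --if/--ib rejected, as in the trace above"
    fi
    rm -f "$dump0"
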
00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # es=22 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:18.646 00:46:18.646 real 0m0.131s 00:46:18.646 user 0m0.071s 00:46:18.646 sys 0m0.061s 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:46:18.646 ************************************ 00:46:18.646 END TEST dd_double_input 00:46:18.646 ************************************ 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:18.646 ************************************ 00:46:18.646 START TEST dd_double_output 00:46:18.646 ************************************ 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # double_output 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@649 -- # local es=0 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:18.646 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:46:18.905 [2024-06-10 12:09:50.708187] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # es=22 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:18.905 00:46:18.905 real 0m0.147s 00:46:18.905 user 0m0.067s 00:46:18.905 sys 0m0.080s 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:46:18.905 ************************************ 00:46:18.905 END TEST dd_double_output 00:46:18.905 ************************************ 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:18.905 ************************************ 00:46:18.905 START TEST dd_no_input 00:46:18.905 ************************************ 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # no_input 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@649 -- # local es=0 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:18.905 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:46:18.905 [2024-06-10 12:09:50.906978] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:46:19.163 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # es=22 00:46:19.163 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:19.163 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:19.163 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:19.163 00:46:19.163 real 0m0.139s 00:46:19.163 user 0m0.060s 00:46:19.163 sys 0m0.079s 00:46:19.163 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:19.163 12:09:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:46:19.163 ************************************ 00:46:19.163 END TEST dd_no_input 00:46:19.163 ************************************ 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:19.163 ************************************ 00:46:19.163 START TEST dd_no_output 00:46:19.163 ************************************ 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # no_output 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@649 -- # local es=0 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:19.163 [2024-06-10 12:09:51.083913] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:46:19.163 12:09:51 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # es=22 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:19.163 00:46:19.163 real 0m0.121s 00:46:19.163 user 0m0.063s 00:46:19.163 sys 0m0.058s 00:46:19.163 ************************************ 00:46:19.163 END TEST dd_no_output 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:46:19.163 ************************************ 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:19.163 ************************************ 00:46:19.163 START TEST dd_wrong_blocksize 00:46:19.163 ************************************ 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # wrong_blocksize 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@649 -- # local es=0 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:19.163 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:46:19.422 [2024-06-10 12:09:51.260077] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # es=22 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:19.422 00:46:19.422 real 0m0.115s 00:46:19.422 user 0m0.051s 00:46:19.422 sys 0m0.064s 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:46:19.422 ************************************ 00:46:19.422 END TEST dd_wrong_blocksize 00:46:19.422 ************************************ 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:19.422 ************************************ 00:46:19.422 START TEST dd_smaller_blocksize 00:46:19.422 ************************************ 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # smaller_blocksize 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@649 -- # local es=0 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:19.422 
12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:19.422 12:09:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:46:19.422 [2024-06-10 12:09:51.442901] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:19.422 [2024-06-10 12:09:51.443226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170455 ] 00:46:19.681 [2024-06-10 12:09:51.605574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:19.939 [2024-06-10 12:09:51.813512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:20.506 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:46:20.506 [2024-06-10 12:09:52.506862] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:46:20.506 [2024-06-10 12:09:52.507199] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:21.441 [2024-06-10 12:09:53.291311] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:46:21.700 12:09:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # es=244 00:46:21.700 12:09:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:21.700 12:09:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # es=116 00:46:21.700 12:09:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # case "$es" in 00:46:21.700 12:09:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@669 -- # es=1 00:46:21.700 12:09:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:21.700 00:46:21.700 real 0m2.339s 00:46:21.700 user 0m1.750s 00:46:21.700 sys 0m0.486s 00:46:21.700 12:09:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:21.700 ************************************ 00:46:21.700 END TEST dd_smaller_blocksize 00:46:21.700 ************************************ 00:46:21.700 12:09:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:21.958 ************************************ 00:46:21.958 START TEST dd_invalid_count 00:46:21.958 ************************************ 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # invalid_count 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
--count=-9 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@649 -- # local es=0 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:21.958 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:46:21.959 [2024-06-10 12:09:53.857432] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # es=22 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:21.959 00:46:21.959 real 0m0.131s 00:46:21.959 user 0m0.066s 00:46:21.959 sys 0m0.063s 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:46:21.959 ************************************ 00:46:21.959 END TEST dd_invalid_count 00:46:21.959 ************************************ 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:21.959 ************************************ 00:46:21.959 START TEST dd_invalid_oflag 00:46:21.959 ************************************ 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # invalid_oflag 00:46:21.959 12:09:53 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@649 -- # local es=0 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:21.959 12:09:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:46:22.216 [2024-06-10 12:09:54.062957] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # es=22 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:22.216 00:46:22.216 real 0m0.145s 00:46:22.216 user 0m0.076s 00:46:22.216 sys 0m0.067s 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:46:22.216 ************************************ 00:46:22.216 END TEST dd_invalid_oflag 00:46:22.216 ************************************ 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:22.216 ************************************ 00:46:22.216 START TEST dd_invalid_iflag 00:46:22.216 ************************************ 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # invalid_iflag 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@649 -- # local es=0 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:22.216 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:46:22.216 [2024-06-10 12:09:54.244044] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # es=22 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:22.474 00:46:22.474 real 0m0.113s 00:46:22.474 user 0m0.058s 00:46:22.474 sys 0m0.054s 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:46:22.474 ************************************ 00:46:22.474 END TEST dd_invalid_iflag 00:46:22.474 ************************************ 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:22.474 ************************************ 00:46:22.474 START TEST dd_unknown_flag 00:46:22.474 ************************************ 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # unknown_flag 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:46:22.474 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@649 -- # local es=0 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:22.475 12:09:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:46:22.475 [2024-06-10 12:09:54.435127] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:46:22.475 [2024-06-10 12:09:54.435564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170587 ] 00:46:22.733 [2024-06-10 12:09:54.614146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:22.990 [2024-06-10 12:09:54.821406] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:23.249 [2024-06-10 12:09:55.141203] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:46:23.249 [2024-06-10 12:09:55.141534] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:23.249  Copying: 0/0 [B] (average 0 Bps)[2024-06-10 12:09:55.141759] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:46:24.185 [2024-06-10 12:09:55.931006] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:46:24.461 00:46:24.461 00:46:24.461 ************************************ 00:46:24.461 END TEST dd_unknown_flag 00:46:24.461 ************************************ 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # es=234 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # es=106 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # case "$es" in 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@669 -- # es=1 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:24.461 00:46:24.461 real 0m2.050s 00:46:24.461 user 0m1.677s 00:46:24.461 sys 0m0.238s 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:24.461 ************************************ 00:46:24.461 START TEST dd_invalid_json 00:46:24.461 ************************************ 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # invalid_json 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@649 -- # local es=0 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@637 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:24.461 12:09:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:46:24.720 [2024-06-10 12:09:56.524253] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:24.721 [2024-06-10 12:09:56.524630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170634 ] 00:46:24.721 [2024-06-10 12:09:56.689101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:24.980 [2024-06-10 12:09:56.907791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:24.980 [2024-06-10 12:09:56.908098] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:46:24.980 [2024-06-10 12:09:56.908235] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:24.980 [2024-06-10 12:09:56.908301] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:24.980 [2024-06-10 12:09:56.908438] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:46:25.546 ************************************ 00:46:25.546 END TEST dd_invalid_json 00:46:25.546 ************************************ 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # es=234 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # es=106 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # case "$es" in 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@669 -- # es=1 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:25.546 00:46:25.546 real 0m0.909s 00:46:25.546 user 0m0.675s 00:46:25.546 sys 0m0.134s 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:25.546 12:09:57 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:46:25.546 ************************************ 00:46:25.546 END TEST spdk_dd_negative 00:46:25.546 ************************************ 00:46:25.546 00:46:25.546 real 0m7.314s 00:46:25.546 user 0m5.098s 00:46:25.546 sys 0m1.884s 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:25.546 12:09:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:46:25.546 ************************************ 00:46:25.546 END TEST spdk_dd 00:46:25.546 ************************************ 00:46:25.546 00:46:25.546 real 3m6.804s 00:46:25.547 user 2m33.634s 00:46:25.547 sys 0m23.316s 00:46:25.547 12:09:57 spdk_dd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:25.547 12:09:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:46:25.547 12:09:57 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:46:25.547 12:09:57 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:46:25.547 12:09:57 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:46:25.547 12:09:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:25.547 12:09:57 -- common/autotest_common.sh@10 -- # set +x 00:46:25.547 ************************************ 00:46:25.547 START TEST blockdev_nvme 00:46:25.547 ************************************ 00:46:25.547 12:09:57 blockdev_nvme -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:46:25.805 * Looking for test storage... 00:46:25.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:46:25.805 12:09:57 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@687 -- # 
'[' -n '' ']' 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=170732 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 170732 00:46:25.805 12:09:57 blockdev_nvme -- common/autotest_common.sh@830 -- # '[' -z 170732 ']' 00:46:25.805 12:09:57 blockdev_nvme -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:25.805 12:09:57 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:46:25.805 12:09:57 blockdev_nvme -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:25.805 12:09:57 blockdev_nvme -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:25.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:25.805 12:09:57 blockdev_nvme -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:25.805 12:09:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:25.805 [2024-06-10 12:09:57.768628] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:25.805 [2024-06-10 12:09:57.768848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170732 ] 00:46:26.064 [2024-06-10 12:09:57.953354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:26.322 [2024-06-10 12:09:58.217605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@863 -- # return 0 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:27.257 12:09:59 blockdev_nvme -- 
bdev/blockdev.sh@740 -- # cat 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:27.257 12:09:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:46:27.257 12:09:59 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:27.516 12:09:59 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:46:27.516 12:09:59 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "bfcd24fd-80b8-419c-95e3-e0ef057a1225"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "bfcd24fd-80b8-419c-95e3-e0ef057a1225",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:46:27.516 12:09:59 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:46:27.516 12:09:59 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:46:27.516 12:09:59 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:46:27.516 12:09:59 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:46:27.516 12:09:59 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 170732 
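The blockdev_nvme prologue above builds its bdev configuration on the fly: scripts/gen_nvme.sh prints a bdev_nvme_attach_controller entry for the PCIe controller at 0000:00:10.0, load_subsystem_config pushes it into the running spdk_tgt, and the save_subsystem_config/cat calls traced above are what later land in test/bdev/bdev.json for hello_bdev and bdevio. A minimal sketch of the first step on its own, assuming the same repo checkout; the /tmp output path is only an example:

    # Sketch: generate and inspect the NVMe bdev config used by this run.
    cd /home/vagrant/spdk_repo/spdk          # repo path as used in this job
    ./scripts/gen_nvme.sh > /tmp/bdev.json   # emits bdev_nvme_attach_controller JSON to stdout
    cat /tmp/bdev.json                       # expect name "Nvme0", trtype "PCIe", traddr "0000:00:10.0"
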
00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@949 -- # '[' -z 170732 ']' 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@953 -- # kill -0 170732 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@954 -- # uname 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 170732 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:46:27.516 killing process with pid 170732 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@967 -- # echo 'killing process with pid 170732' 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@968 -- # kill 170732 00:46:27.516 12:09:59 blockdev_nvme -- common/autotest_common.sh@973 -- # wait 170732 00:46:30.048 12:10:01 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:46:30.048 12:10:01 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:46:30.048 12:10:01 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:46:30.048 12:10:01 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:30.048 12:10:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:30.048 ************************************ 00:46:30.048 START TEST bdev_hello_world 00:46:30.048 ************************************ 00:46:30.048 12:10:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:46:30.048 [2024-06-10 12:10:02.070432] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:30.048 [2024-06-10 12:10:02.070684] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170831 ] 00:46:30.305 [2024-06-10 12:10:02.250951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:30.562 [2024-06-10 12:10:02.481024] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:31.129 [2024-06-10 12:10:02.997699] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:46:31.129 [2024-06-10 12:10:02.997788] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:46:31.129 [2024-06-10 12:10:02.997826] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:46:31.129 [2024-06-10 12:10:03.001084] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:46:31.129 [2024-06-10 12:10:03.001696] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:46:31.129 [2024-06-10 12:10:03.001739] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:46:31.129 [2024-06-10 12:10:03.002025] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
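Editorial aside: the hello_bdev example exercised above can be reproduced by hand against the same generated configuration. A minimal sketch, assuming the checkout lives at the path this job uses and that test/bdev/bdev.json (written by the harness from gen_nvme.sh output) is still present; run with root or equivalent device permissions:

    # Re-run the hello world example against the Nvme0n1 bdev described in bdev.json.
    SPDK=/home/vagrant/spdk_repo/spdk          # assumption: same layout as this CI VM
    sudo "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b Nvme0n1

The flags mirror the run_test invocation above: --json supplies the bdev subsystem configuration and -b names the bdev the example opens, writes to, and reads back from.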
00:46:31.129 00:46:31.129 [2024-06-10 12:10:03.002090] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:46:32.501 00:46:32.501 real 0m2.454s 00:46:32.501 user 0m2.145s 00:46:32.501 sys 0m0.208s 00:46:32.501 12:10:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:32.501 12:10:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:46:32.501 ************************************ 00:46:32.501 END TEST bdev_hello_world 00:46:32.501 ************************************ 00:46:32.501 12:10:04 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:46:32.501 12:10:04 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:46:32.501 12:10:04 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:32.501 12:10:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:32.501 ************************************ 00:46:32.501 START TEST bdev_bounds 00:46:32.501 ************************************ 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=170875 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:46:32.501 Process bdevio pid: 170875 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 170875' 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 170875 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 170875 ']' 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:32.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:32.501 12:10:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:46:32.758 [2024-06-10 12:10:04.591016] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:46:32.758 [2024-06-10 12:10:04.591904] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170875 ] 00:46:32.758 [2024-06-10 12:10:04.784430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:33.016 [2024-06-10 12:10:05.067254] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:33.016 [2024-06-10 12:10:05.067376] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:33.016 [2024-06-10 12:10:05.067383] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:33.601 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:33.601 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:46:33.601 12:10:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:46:33.860 I/O targets: 00:46:33.860 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:46:33.860 00:46:33.860 00:46:33.860 CUnit - A unit testing framework for C - Version 2.1-3 00:46:33.860 http://cunit.sourceforge.net/ 00:46:33.860 00:46:33.860 00:46:33.860 Suite: bdevio tests on: Nvme0n1 00:46:33.860 Test: blockdev write read block ...passed 00:46:33.860 Test: blockdev write zeroes read block ...passed 00:46:33.860 Test: blockdev write zeroes read no split ...passed 00:46:33.860 Test: blockdev write zeroes read split ...passed 00:46:33.860 Test: blockdev write zeroes read split partial ...passed 00:46:33.860 Test: blockdev reset ...[2024-06-10 12:10:05.758434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:46:33.860 [2024-06-10 12:10:05.762357] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:46:33.860 passed 00:46:33.860 Test: blockdev write read 8 blocks ...passed 00:46:33.860 Test: blockdev write read size > 128k ...passed 00:46:33.860 Test: blockdev write read invalid size ...passed 00:46:33.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:46:33.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:46:33.860 Test: blockdev write read max offset ...passed 00:46:33.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:46:33.860 Test: blockdev writev readv 8 blocks ...passed 00:46:33.860 Test: blockdev writev readv 30 x 1block ...passed 00:46:33.860 Test: blockdev writev readv block ...passed 00:46:33.860 Test: blockdev writev readv size > 128k ...passed 00:46:33.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:46:33.860 Test: blockdev comparev and writev ...[2024-06-10 12:10:05.772305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x31c0d000 len:0x1000 00:46:33.860 [2024-06-10 12:10:05.772390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:46:33.860 passed 00:46:33.860 Test: blockdev nvme passthru rw ...passed 00:46:33.860 Test: blockdev nvme passthru vendor specific ...[2024-06-10 12:10:05.773416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:46:33.860 [2024-06-10 12:10:05.773475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:46:33.860 passed 00:46:33.860 Test: blockdev nvme admin passthru ...passed 00:46:33.860 Test: blockdev copy ...passed 00:46:33.860 00:46:33.860 Run Summary: Type Total Ran Passed Failed Inactive 00:46:33.860 suites 1 1 n/a 0 0 00:46:33.860 tests 23 23 23 0 0 00:46:33.860 asserts 152 152 152 0 n/a 00:46:33.860 00:46:33.860 Elapsed time = 0.307 seconds 00:46:33.860 0 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 170875 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 170875 ']' 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 170875 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 170875 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 170875' 00:46:33.860 killing process with pid 170875 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # kill 170875 00:46:33.860 12:10:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # wait 170875 00:46:35.238 12:10:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:46:35.238 00:46:35.238 real 0m2.735s 00:46:35.238 user 0m6.222s 00:46:35.238 sys 0m0.383s 00:46:35.238 12:10:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:35.238 
12:10:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:46:35.238 ************************************ 00:46:35.238 END TEST bdev_bounds 00:46:35.238 ************************************ 00:46:35.238 12:10:07 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:46:35.238 12:10:07 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:46:35.238 12:10:07 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:35.238 12:10:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:35.497 ************************************ 00:46:35.497 START TEST bdev_nbd 00:46:35.497 ************************************ 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1') 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1') 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=170944 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 170944 /var/tmp/spdk-nbd.sock 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 170944 ']' 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:46:35.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
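Editorial aside: nbd_function_test, whose trace follows, starts a bdev_svc app on a dedicated RPC socket and exports the Nvme0n1 bdev as a kernel block device over NBD. Condensed to the commands that appear verbatim below (the socket wait and /proc/partitions polling are omitted), the round trip is roughly:

    # Export an SPDK bdev as /dev/nbd0 via NBD and read one block back.
    # Needs root and the nbd kernel module loaded.
    SPDK=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
        --json "$SPDK/test/bdev/bdev.json" &     # app that only hosts bdevs and the RPC server
    sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # sanity read of one 4 KiB block
    sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

This is a sketch of the flow, not the test script itself; the real test also does the dd write/read/cmp verification shown further down.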
00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:46:35.497 12:10:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:46:35.497 [2024-06-10 12:10:07.395905] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:35.497 [2024-06-10 12:10:07.396135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:35.755 [2024-06-10 12:10:07.582200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:36.014 [2024-06-10 12:10:07.819092] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:46:36.580 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 
)) 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:36.838 1+0 records in 00:46:36.838 1+0 records out 00:46:36.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447271 s, 9.2 MB/s 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:46:36.838 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:37.096 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:46:37.096 { 00:46:37.096 "nbd_device": "/dev/nbd0", 00:46:37.096 "bdev_name": "Nvme0n1" 00:46:37.096 } 00:46:37.096 ]' 00:46:37.096 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:46:37.096 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:46:37.096 { 00:46:37.096 "nbd_device": "/dev/nbd0", 00:46:37.096 "bdev_name": "Nvme0n1" 00:46:37.096 } 00:46:37.096 ]' 00:46:37.096 12:10:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:46:37.096 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:46:37.096 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:37.096 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:46:37.096 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:37.096 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:37.096 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:37.096 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:37.355 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:46:37.614 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:46:37.891 /dev/nbd0 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 
-- # (( i = 1 )) 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:37.891 1+0 records in 00:46:37.891 1+0 records out 00:46:37.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521088 s, 7.9 MB/s 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:37.891 12:10:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:38.150 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:46:38.150 { 00:46:38.150 "nbd_device": "/dev/nbd0", 00:46:38.150 "bdev_name": "Nvme0n1" 00:46:38.150 } 00:46:38.150 ]' 00:46:38.150 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:46:38.150 { 00:46:38.150 "nbd_device": "/dev/nbd0", 00:46:38.150 "bdev_name": "Nvme0n1" 00:46:38.150 } 00:46:38.150 ]' 00:46:38.150 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- 
# local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:46:38.151 256+0 records in 00:46:38.151 256+0 records out 00:46:38.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00770156 s, 136 MB/s 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:46:38.151 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:46:38.409 256+0 records in 00:46:38.409 256+0 records out 00:46:38.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0564035 s, 18.6 MB/s 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:38.409 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:46:38.410 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:38.410 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:46:38.410 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:38.410 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:38.410 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:38.410 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:38.668 12:10:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:46:38.668 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:46:38.927 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:46:38.928 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:46:38.928 12:10:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:46:39.186 malloc_lvol_verify 00:46:39.186 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:46:39.444 fd4f8c3c-d30e-4b80-a316-1772ccb1651c 00:46:39.444 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:46:39.701 97cfd406-f868-4258-a5cc-39afc8f53fab 00:46:39.701 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:46:39.960 /dev/nbd0 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:46:39.960 mke2fs 1.46.5 (30-Dec-2021) 00:46:39.960 00:46:39.960 Filesystem too small for a journal 00:46:39.960 Discarding device blocks: 0/1024 done 00:46:39.960 Creating filesystem with 1024 4k blocks and 1024 inodes 00:46:39.960 00:46:39.960 Allocating group tables: 0/1 done 00:46:39.960 Writing inode tables: 0/1 done 00:46:39.960 Writing superblocks and filesystem accounting information: 0/1 done 00:46:39.960 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock /dev/nbd0 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:39.960 12:10:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:39.960 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 170944 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 170944 ']' 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 170944 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 170944 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 170944' 00:46:40.219 killing process with pid 170944 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # kill 170944 00:46:40.219 12:10:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # wait 170944 00:46:41.597 12:10:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:46:41.597 00:46:41.597 real 0m6.137s 00:46:41.597 user 0m8.503s 00:46:41.597 sys 0m1.507s 00:46:41.597 12:10:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:41.597 12:10:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:46:41.597 ************************************ 00:46:41.597 END TEST bdev_nbd 00:46:41.597 ************************************ 00:46:41.597 12:10:13 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:46:41.597 12:10:13 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:46:41.597 skipping fio tests on NVMe due to multi-ns failures. 
00:46:41.597 12:10:13 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:46:41.597 12:10:13 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:46:41.597 12:10:13 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:46:41.597 12:10:13 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:46:41.597 12:10:13 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:41.597 12:10:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:41.597 ************************************ 00:46:41.597 START TEST bdev_verify 00:46:41.597 ************************************ 00:46:41.597 12:10:13 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:46:41.597 [2024-06-10 12:10:13.589180] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:41.597 [2024-06-10 12:10:13.589409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171140 ] 00:46:41.856 [2024-06-10 12:10:13.773162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:42.116 [2024-06-10 12:10:13.994687] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:42.116 [2024-06-10 12:10:13.994705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:42.684 Running I/O for 5 seconds... 
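Editorial aside: the verify pass now running is plain bdevperf. A minimal sketch of the same invocation outside the harness, with the flags copied from the run_test line above:

    # Same bdevperf verify pass, run by hand.
    SPDK=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q queue depth, -o I/O size in bytes, -w workload type, -t run time in seconds,
    # -m core mask (0x3 = cores 0 and 1); -C is passed through unchanged from the test.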
00:46:47.956 00:46:47.956 Latency(us) 00:46:47.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:47.956 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:46:47.956 Verification LBA range: start 0x0 length 0xa0000 00:46:47.956 Nvme0n1 : 5.01 9181.17 35.86 0.00 0.00 13869.19 1154.68 23343.30 00:46:47.956 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:47.956 Verification LBA range: start 0xa0000 length 0xa0000 00:46:47.956 Nvme0n1 : 5.01 10072.54 39.35 0.00 0.00 12635.68 1115.67 20222.54 00:46:47.956 =================================================================================================================== 00:46:47.956 Total : 19253.71 75.21 0.00 0.00 13223.80 1115.67 23343.30 00:46:49.334 00:46:49.334 real 0m7.734s 00:46:49.334 user 0m14.056s 00:46:49.334 sys 0m0.310s 00:46:49.334 12:10:21 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:49.334 12:10:21 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:46:49.334 ************************************ 00:46:49.334 END TEST bdev_verify 00:46:49.334 ************************************ 00:46:49.334 12:10:21 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:46:49.334 12:10:21 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:46:49.334 12:10:21 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:49.334 12:10:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:49.334 ************************************ 00:46:49.334 START TEST bdev_verify_big_io 00:46:49.334 ************************************ 00:46:49.334 12:10:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:46:49.334 [2024-06-10 12:10:21.387646] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:49.334 [2024-06-10 12:10:21.387873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171254 ] 00:46:49.593 [2024-06-10 12:10:21.574770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:49.852 [2024-06-10 12:10:21.794583] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:49.852 [2024-06-10 12:10:21.794585] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:50.419 Running I/O for 5 seconds... 
00:46:55.686 00:46:55.686 Latency(us) 00:46:55.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:55.686 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:46:55.686 Verification LBA range: start 0x0 length 0xa000 00:46:55.686 Nvme0n1 : 5.08 920.16 57.51 0.00 0.00 136225.96 643.66 206719.27 00:46:55.686 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:46:55.686 Verification LBA range: start 0xa000 length 0xa000 00:46:55.686 Nvme0n1 : 5.07 924.82 57.80 0.00 0.00 135643.89 1209.30 195734.19 00:46:55.686 =================================================================================================================== 00:46:55.686 Total : 1844.98 115.31 0.00 0.00 135934.30 643.66 206719.27 00:46:57.590 00:46:57.590 real 0m8.046s 00:46:57.590 user 0m14.753s 00:46:57.590 sys 0m0.251s 00:46:57.590 12:10:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:57.590 12:10:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:46:57.590 ************************************ 00:46:57.590 END TEST bdev_verify_big_io 00:46:57.590 ************************************ 00:46:57.590 12:10:29 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:57.590 12:10:29 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:46:57.590 12:10:29 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:57.590 12:10:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:57.590 ************************************ 00:46:57.590 START TEST bdev_write_zeroes 00:46:57.590 ************************************ 00:46:57.590 12:10:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:57.590 [2024-06-10 12:10:29.491891] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:46:57.590 [2024-06-10 12:10:29.492100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171357 ] 00:46:57.848 [2024-06-10 12:10:29.674000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:57.848 [2024-06-10 12:10:29.896891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:58.415 Running I/O for 1 seconds... 
00:46:59.389 00:46:59.389 Latency(us) 00:46:59.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:59.389 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:59.389 Nvme0n1 : 1.00 58116.46 227.02 0.00 0.00 2197.30 725.58 12420.63 00:46:59.389 =================================================================================================================== 00:46:59.389 Total : 58116.46 227.02 0.00 0.00 2197.30 725.58 12420.63 00:47:01.292 00:47:01.292 real 0m3.545s 00:47:01.292 user 0m3.204s 00:47:01.292 sys 0m0.240s 00:47:01.292 12:10:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:01.292 12:10:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:47:01.292 ************************************ 00:47:01.292 END TEST bdev_write_zeroes 00:47:01.292 ************************************ 00:47:01.292 12:10:33 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:01.292 12:10:33 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:47:01.292 12:10:33 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:01.292 12:10:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:47:01.292 ************************************ 00:47:01.292 START TEST bdev_json_nonenclosed 00:47:01.292 ************************************ 00:47:01.292 12:10:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:01.292 [2024-06-10 12:10:33.108557] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:47:01.292 [2024-06-10 12:10:33.108776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171424 ] 00:47:01.292 [2024-06-10 12:10:33.296796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:01.551 [2024-06-10 12:10:33.504424] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:01.551 [2024-06-10 12:10:33.504516] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:47:01.551 [2024-06-10 12:10:33.504568] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:47:01.551 [2024-06-10 12:10:33.504594] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:02.118 00:47:02.118 real 0m0.960s 00:47:02.118 user 0m0.690s 00:47:02.118 sys 0m0.169s 00:47:02.118 12:10:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:02.118 12:10:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:47:02.118 ************************************ 00:47:02.118 END TEST bdev_json_nonenclosed 00:47:02.118 ************************************ 00:47:02.118 12:10:34 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:02.118 12:10:34 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:47:02.118 12:10:34 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:02.118 12:10:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:47:02.118 ************************************ 00:47:02.118 START TEST bdev_json_nonarray 00:47:02.118 ************************************ 00:47:02.118 12:10:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:02.118 [2024-06-10 12:10:34.120474] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:47:02.118 [2024-06-10 12:10:34.120692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171462 ] 00:47:02.376 [2024-06-10 12:10:34.300528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:02.634 [2024-06-10 12:10:34.517646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:02.634 [2024-06-10 12:10:34.517749] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:47:02.634 [2024-06-10 12:10:34.517806] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:47:02.634 [2024-06-10 12:10:34.517831] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:03.200 00:47:03.200 real 0m0.968s 00:47:03.200 user 0m0.707s 00:47:03.200 sys 0m0.160s 00:47:03.200 12:10:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:03.200 12:10:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:47:03.200 ************************************ 00:47:03.200 END TEST bdev_json_nonarray 00:47:03.200 ************************************ 00:47:03.200 12:10:35 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:47:03.200 12:10:35 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:47:03.200 12:10:35 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:47:03.200 12:10:35 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:47:03.200 12:10:35 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:47:03.201 12:10:35 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:47:03.201 12:10:35 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:03.201 12:10:35 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:47:03.201 12:10:35 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:47:03.201 12:10:35 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:47:03.201 12:10:35 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:47:03.201 ************************************ 00:47:03.201 END TEST blockdev_nvme 00:47:03.201 ************************************ 00:47:03.201 00:47:03.201 real 0m37.565s 00:47:03.201 user 0m55.081s 00:47:03.201 sys 0m4.130s 00:47:03.201 12:10:35 blockdev_nvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:03.201 12:10:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:47:03.201 12:10:35 -- spdk/autotest.sh@217 -- # uname -s 00:47:03.201 12:10:35 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:47:03.201 12:10:35 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:47:03.201 12:10:35 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:47:03.201 12:10:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:03.201 12:10:35 -- common/autotest_common.sh@10 -- # set +x 00:47:03.201 ************************************ 00:47:03.201 START TEST blockdev_nvme_gpt 00:47:03.201 ************************************ 00:47:03.201 12:10:35 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:47:03.201 * Looking for test storage... 
00:47:03.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=171548 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 171548 00:47:03.201 12:10:35 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:47:03.201 12:10:35 blockdev_nvme_gpt -- common/autotest_common.sh@830 -- # '[' -z 171548 ']' 00:47:03.201 12:10:35 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:03.201 12:10:35 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local max_retries=100 00:47:03.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:03.459 12:10:35 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:47:03.459 12:10:35 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # xtrace_disable 00:47:03.459 12:10:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:03.459 [2024-06-10 12:10:35.353999] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:47:03.459 [2024-06-10 12:10:35.354969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171548 ] 00:47:03.716 [2024-06-10 12:10:35.539500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:03.716 [2024-06-10 12:10:35.737679] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:04.649 12:10:36 blockdev_nvme_gpt -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:47:04.649 12:10:36 blockdev_nvme_gpt -- common/autotest_common.sh@863 -- # return 0 00:47:04.649 12:10:36 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:47:04.649 12:10:36 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:47:04.649 12:10:36 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:04.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:04.907 Waiting for block devices as requested 00:47:05.165 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:47:05.165 12:10:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:47:05.165 12:10:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:47:05.165 12:10:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local nvme bdf 00:47:05.165 12:10:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:47:05.165 12:10:37 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:47:05.165 12:10:37 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:47:05.165 12:10:37 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:47:05.165 12:10:37 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme0/nvme0n1') 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:47:05.165 BYT; 00:47:05.165 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:47:05.165 BYT; 00:47:05.165 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ 
\d\i\s\k\ \l\a\b\e\l* ]] 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:47:05.165 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:47:05.741 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:47:05.741 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:47:05.741 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:47:05.742 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:47:05.742 12:10:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:47:05.742 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:47:05.742 12:10:37 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:47:06.675 The operation has completed successfully. 
00:47:06.675 12:10:38 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:47:07.608 The operation has completed successfully. 00:47:07.608 12:10:39 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:08.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:08.175 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:47:09.112 12:10:40 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:47:09.112 12:10:40 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:09.112 12:10:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:09.112 [] 00:47:09.112 12:10:40 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:09.112 12:10:40 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:47:09.112 12:10:40 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:47:09.112 12:10:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:47:09.112 12:10:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:09.112 12:10:40 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:47:09.112 12:10:40 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:09.112 12:10:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:47:09.112 
12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:09.112 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:47:09.112 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:47:09.371 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:47:09.371 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:47:09.371 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:47:09.371 12:10:41 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 171548 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@949 -- # '[' -z 171548 ']' 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # kill -0 171548 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # uname 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 171548 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo 
']' 00:47:09.371 killing process with pid 171548 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # echo 'killing process with pid 171548' 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # kill 171548 00:47:09.371 12:10:41 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # wait 171548 00:47:11.902 12:10:43 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:11.902 12:10:43 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:47:11.902 12:10:43 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:47:11.902 12:10:43 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:11.902 12:10:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:11.902 ************************************ 00:47:11.902 START TEST bdev_hello_world 00:47:11.902 ************************************ 00:47:11.902 12:10:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:47:11.902 [2024-06-10 12:10:43.873347] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:47:11.902 [2024-06-10 12:10:43.873604] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171982 ] 00:47:12.159 [2024-06-10 12:10:44.053383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:12.416 [2024-06-10 12:10:44.283499] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:12.980 [2024-06-10 12:10:44.800602] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:47:12.980 [2024-06-10 12:10:44.800689] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:47:12.980 [2024-06-10 12:10:44.800726] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:47:12.980 [2024-06-10 12:10:44.803793] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:47:12.980 [2024-06-10 12:10:44.804310] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:47:12.980 [2024-06-10 12:10:44.804358] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:47:12.980 [2024-06-10 12:10:44.804646] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:47:12.980 00:47:12.980 [2024-06-10 12:10:44.804703] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:47:14.355 00:47:14.355 real 0m2.442s 00:47:14.355 user 0m2.087s 00:47:14.355 sys 0m0.256s 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:47:14.355 ************************************ 00:47:14.355 END TEST bdev_hello_world 00:47:14.355 ************************************ 00:47:14.355 12:10:46 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:47:14.355 12:10:46 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:47:14.355 12:10:46 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:14.355 12:10:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:14.355 ************************************ 00:47:14.355 START TEST bdev_bounds 00:47:14.355 ************************************ 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=172036 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:47:14.355 Process bdevio pid: 172036 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 172036' 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 172036 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 172036 ']' 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:47:14.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:47:14.355 12:10:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:47:14.355 [2024-06-10 12:10:46.372434] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:47:14.355 [2024-06-10 12:10:46.372668] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172036 ] 00:47:14.613 [2024-06-10 12:10:46.562508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:14.872 [2024-06-10 12:10:46.765924] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:47:14.872 [2024-06-10 12:10:46.766104] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:14.872 [2024-06-10 12:10:46.766108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:47:15.437 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:47:15.437 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:47:15.437 12:10:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:47:15.437 I/O targets: 00:47:15.437 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:47:15.437 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:47:15.437 00:47:15.437 00:47:15.437 CUnit - A unit testing framework for C - Version 2.1-3 00:47:15.437 http://cunit.sourceforge.net/ 00:47:15.437 00:47:15.437 00:47:15.437 Suite: bdevio tests on: Nvme0n1p2 00:47:15.437 Test: blockdev write read block ...passed 00:47:15.437 Test: blockdev write zeroes read block ...passed 00:47:15.437 Test: blockdev write zeroes read no split ...passed 00:47:15.437 Test: blockdev write zeroes read split ...passed 00:47:15.437 Test: blockdev write zeroes read split partial ...passed 00:47:15.437 Test: blockdev reset ...[2024-06-10 12:10:47.426874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:47:15.437 [2024-06-10 12:10:47.430656] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:47:15.437 passed 00:47:15.437 Test: blockdev write read 8 blocks ...passed 00:47:15.437 Test: blockdev write read size > 128k ...passed 00:47:15.437 Test: blockdev write read invalid size ...passed 00:47:15.437 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:47:15.437 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:47:15.437 Test: blockdev write read max offset ...passed 00:47:15.437 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:47:15.437 Test: blockdev writev readv 8 blocks ...passed 00:47:15.437 Test: blockdev writev readv 30 x 1block ...passed 00:47:15.437 Test: blockdev writev readv block ...passed 00:47:15.437 Test: blockdev writev readv size > 128k ...passed 00:47:15.437 Test: blockdev writev readv size > 128k in two iovs ...passed 00:47:15.437 Test: blockdev comparev and writev ...[2024-06-10 12:10:47.439975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ac0b000 len:0x1000 00:47:15.437 [2024-06-10 12:10:47.440058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:47:15.437 passed 00:47:15.437 Test: blockdev nvme passthru rw ...passed 00:47:15.437 Test: blockdev nvme passthru vendor specific ...passed 00:47:15.437 Test: blockdev nvme admin passthru ...passed 00:47:15.437 Test: blockdev copy ...passed 00:47:15.437 Suite: bdevio tests on: Nvme0n1p1 00:47:15.437 Test: blockdev write read block ...passed 00:47:15.437 Test: blockdev write zeroes read block ...passed 00:47:15.437 Test: blockdev write zeroes read no split ...passed 00:47:15.437 Test: blockdev write zeroes read split ...passed 00:47:15.696 Test: blockdev write zeroes read split partial ...passed 00:47:15.696 Test: blockdev reset ...[2024-06-10 12:10:47.507595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:47:15.696 [2024-06-10 12:10:47.511328] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:47:15.696 passed 00:47:15.696 Test: blockdev write read 8 blocks ...passed 00:47:15.696 Test: blockdev write read size > 128k ...passed 00:47:15.696 Test: blockdev write read invalid size ...passed 00:47:15.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:47:15.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:47:15.696 Test: blockdev write read max offset ...passed 00:47:15.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:47:15.696 Test: blockdev writev readv 8 blocks ...passed 00:47:15.696 Test: blockdev writev readv 30 x 1block ...passed 00:47:15.696 Test: blockdev writev readv block ...passed 00:47:15.696 Test: blockdev writev readv size > 128k ...passed 00:47:15.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:47:15.696 Test: blockdev comparev and writev ...[2024-06-10 12:10:47.520339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ac0d000 len:0x1000 00:47:15.696 [2024-06-10 12:10:47.520418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:47:15.696 passed 00:47:15.696 Test: blockdev nvme passthru rw ...passed 00:47:15.696 Test: blockdev nvme passthru vendor specific ...passed 00:47:15.696 Test: blockdev nvme admin passthru ...passed 00:47:15.696 Test: blockdev copy ...passed 00:47:15.696 00:47:15.696 Run Summary: Type Total Ran Passed Failed Inactive 00:47:15.696 suites 2 2 n/a 0 0 00:47:15.696 tests 46 46 46 0 0 00:47:15.696 asserts 284 284 284 0 n/a 00:47:15.696 00:47:15.696 Elapsed time = 0.446 seconds 00:47:15.696 0 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 172036 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 172036 ']' 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 172036 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 172036 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 172036' 00:47:15.696 killing process with pid 172036 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # kill 172036 00:47:15.696 12:10:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # wait 172036 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:47:17.070 00:47:17.070 real 0m2.595s 00:47:17.070 user 0m5.979s 00:47:17.070 sys 0m0.392s 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:47:17.070 ************************************ 00:47:17.070 END TEST bdev_bounds 00:47:17.070 ************************************ 00:47:17.070 12:10:48 blockdev_nvme_gpt -- 
bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:47:17.070 12:10:48 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:47:17.070 12:10:48 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:17.070 12:10:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:17.070 ************************************ 00:47:17.070 START TEST bdev_nbd 00:47:17.070 ************************************ 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=172105 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 172105 /var/tmp/spdk-nbd.sock 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 172105 ']' 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:47:17.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:47:17.070 12:10:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:47:17.070 [2024-06-10 12:10:49.011957] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:47:17.070 [2024-06-10 12:10:49.012127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:17.327 [2024-06-10 12:10:49.176860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:17.328 [2024-06-10 12:10:49.383505] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:47:18.261 12:10:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@872 -- # break 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:18.261 1+0 records in 00:47:18.261 1+0 records out 00:47:18.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494897 s, 8.3 MB/s 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:47:18.261 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:18.836 1+0 records in 00:47:18.836 1+0 records out 00:47:18.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632153 s, 6.5 MB/s 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:47:18.836 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:47:19.095 { 00:47:19.095 "nbd_device": "/dev/nbd0", 00:47:19.095 "bdev_name": "Nvme0n1p1" 00:47:19.095 }, 00:47:19.095 { 00:47:19.095 "nbd_device": "/dev/nbd1", 00:47:19.095 "bdev_name": "Nvme0n1p2" 00:47:19.095 } 00:47:19.095 ]' 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:47:19.095 { 00:47:19.095 "nbd_device": "/dev/nbd0", 00:47:19.095 "bdev_name": "Nvme0n1p1" 00:47:19.095 }, 00:47:19.095 { 00:47:19.095 "nbd_device": "/dev/nbd1", 00:47:19.095 "bdev_name": "Nvme0n1p2" 00:47:19.095 } 00:47:19.095 ]' 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:19.095 12:10:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:19.354 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 
20 )) 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:19.613 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:47:19.871 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:19.872 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:47:19.872 /dev/nbd0 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:20.130 1+0 records in 00:47:20.130 1+0 records out 00:47:20.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443831 s, 9.2 MB/s 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:20.130 12:10:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:47:20.130 /dev/nbd1 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:47:20.130 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:47:20.131 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:47:20.131 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:20.131 1+0 records in 00:47:20.131 1+0 records out 00:47:20.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510738 s, 8.0 MB/s 00:47:20.131 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:47:20.388 { 00:47:20.388 "nbd_device": "/dev/nbd0", 00:47:20.388 "bdev_name": "Nvme0n1p1" 00:47:20.388 }, 00:47:20.388 { 00:47:20.388 "nbd_device": "/dev/nbd1", 00:47:20.388 "bdev_name": "Nvme0n1p2" 00:47:20.388 } 00:47:20.388 ]' 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:47:20.388 { 00:47:20.388 "nbd_device": "/dev/nbd0", 00:47:20.388 "bdev_name": "Nvme0n1p1" 00:47:20.388 }, 00:47:20.388 { 00:47:20.388 "nbd_device": "/dev/nbd1", 00:47:20.388 "bdev_name": "Nvme0n1p2" 00:47:20.388 } 00:47:20.388 ]' 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:47:20.388 /dev/nbd1' 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:47:20.388 /dev/nbd1' 00:47:20.388 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:47:20.646 12:10:52 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:47:20.646 256+0 records in 00:47:20.646 256+0 records out 00:47:20.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00703371 s, 149 MB/s 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:47:20.646 256+0 records in 00:47:20.646 256+0 records out 00:47:20.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0738054 s, 14.2 MB/s 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:47:20.646 256+0 records in 00:47:20.646 256+0 records out 00:47:20.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0747501 s, 14.0 MB/s 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:20.646 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:20.904 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:20.904 12:10:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:20.904 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:20.904 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:20.905 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:20.905 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:20.905 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:20.905 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:20.905 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:20.905 12:10:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:47:21.162 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:47:21.162 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:47:21.162 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:47:21.163 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:21.163 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:21.163 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:47:21.163 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:21.163 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:21.163 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:21.163 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:21.163 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:21.420 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:47:21.678 
12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:47:21.678 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:47:21.936 malloc_lvol_verify 00:47:21.936 12:10:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:47:22.194 ed5c8ce6-a5cd-41fc-a1f9-96306f8bbbd4 00:47:22.194 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:47:22.451 ca4e6533-8024-4576-825f-d5d8e5ac98de 00:47:22.451 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:47:22.451 /dev/nbd0 00:47:22.451 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:47:22.451 mke2fs 1.46.5 (30-Dec-2021) 00:47:22.451 00:47:22.451 Filesystem too small for a journal 00:47:22.451 Discarding device blocks: 0/1024 done 00:47:22.452 Creating filesystem with 1024 4k blocks and 1024 inodes 00:47:22.452 00:47:22.452 Allocating group tables: 0/1 done 00:47:22.452 Writing inode tables: 0/1 done 00:47:22.452 Writing superblocks and filesystem accounting information: 0/1 done 00:47:22.452 00:47:22.452 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:47:22.452 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:22.452 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:22.452 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:47:22.452 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:22.452 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:22.452 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:22.452 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 
172105 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 172105 ']' 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 172105 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 172105 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:47:22.710 killing process with pid 172105 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 172105' 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # kill 172105 00:47:22.710 12:10:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # wait 172105 00:47:24.138 12:10:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:47:24.138 00:47:24.138 real 0m7.132s 00:47:24.138 user 0m9.998s 00:47:24.138 sys 0m1.860s 00:47:24.138 12:10:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:24.138 12:10:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:47:24.138 ************************************ 00:47:24.138 END TEST bdev_nbd 00:47:24.138 ************************************ 00:47:24.138 12:10:56 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:47:24.138 12:10:56 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:47:24.138 skipping fio tests on NVMe due to multi-ns failures. 00:47:24.138 12:10:56 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:47:24.138 12:10:56 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:47:24.138 12:10:56 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:24.138 12:10:56 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:47:24.138 12:10:56 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:47:24.138 12:10:56 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:24.138 12:10:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:24.138 ************************************ 00:47:24.138 START TEST bdev_verify 00:47:24.138 ************************************ 00:47:24.138 12:10:56 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:47:24.396 [2024-06-10 12:10:56.222041] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
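The bdev_nbd data-verify step earlier in this test reduces to a simple write/compare pattern: fill a scratch file with 1 MiB of random data, dd it onto every exported /dev/nbdX with O_DIRECT, then byte-compare each device against the file. A minimal standalone sketch of that pattern follows (shell); it assumes the NBD devices are already exported via nbd_start_disks, and the scratch-file path is illustrative rather than the suite's nbdrandtest path.

#!/usr/bin/env bash
set -euo pipefail
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
# 256 x 4 KiB blocks = 1 MiB of random reference data
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
# write the pattern to each NBD device, bypassing the page cache
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
# read back and byte-compare against the reference file
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"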
00:47:24.396 [2024-06-10 12:10:56.222253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172359 ] 00:47:24.396 [2024-06-10 12:10:56.403377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:24.653 [2024-06-10 12:10:56.619003] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:47:24.653 [2024-06-10 12:10:56.619008] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:25.219 Running I/O for 5 seconds... 00:47:30.475 00:47:30.475 Latency(us) 00:47:30.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:30.475 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:47:30.475 Verification LBA range: start 0x0 length 0x4ff80 00:47:30.475 Nvme0n1p1 : 5.03 4964.49 19.39 0.00 0.00 25710.65 4462.69 31457.28 00:47:30.475 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:47:30.475 Verification LBA range: start 0x4ff80 length 0x4ff80 00:47:30.475 Nvme0n1p1 : 5.03 4830.50 18.87 0.00 0.00 26341.34 3042.74 32206.26 00:47:30.475 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:47:30.475 Verification LBA range: start 0x0 length 0x4ff7f 00:47:30.475 Nvme0n1p2 : 5.03 4962.89 19.39 0.00 0.00 25676.61 3323.61 32206.26 00:47:30.475 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:47:30.475 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:47:30.475 Nvme0n1p2 : 5.03 4832.61 18.88 0.00 0.00 26403.03 4462.69 32955.25 00:47:30.475 =================================================================================================================== 00:47:30.475 Total : 19590.49 76.53 0.00 0.00 26028.50 3042.74 32955.25 00:47:31.850 00:47:31.850 real 0m7.745s 00:47:31.850 user 0m14.086s 00:47:31.850 sys 0m0.297s 00:47:31.850 12:11:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:31.850 12:11:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:47:31.850 ************************************ 00:47:31.850 END TEST bdev_verify 00:47:31.850 ************************************ 00:47:32.107 12:11:03 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:47:32.107 12:11:03 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:47:32.107 12:11:03 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:32.107 12:11:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:32.107 ************************************ 00:47:32.107 START TEST bdev_verify_big_io 00:47:32.107 ************************************ 00:47:32.107 12:11:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:47:32.107 [2024-06-10 12:11:04.022171] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:47:32.107 [2024-06-10 12:11:04.022952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172463 ] 00:47:32.366 [2024-06-10 12:11:04.191382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:32.366 [2024-06-10 12:11:04.404286] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:47:32.366 [2024-06-10 12:11:04.404288] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:32.933 Running I/O for 5 seconds... 00:47:38.207 00:47:38.207 Latency(us) 00:47:38.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:38.207 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:47:38.207 Verification LBA range: start 0x0 length 0x4ff8 00:47:38.207 Nvme0n1p1 : 5.13 474.23 29.64 0.00 0.00 265389.91 10922.67 367500.92 00:47:38.207 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:47:38.207 Verification LBA range: start 0x4ff8 length 0x4ff8 00:47:38.207 Nvme0n1p1 : 5.21 464.85 29.05 0.00 0.00 262075.88 869.91 379484.65 00:47:38.207 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:47:38.207 Verification LBA range: start 0x0 length 0x4ff7 00:47:38.207 Nvme0n1p2 : 5.19 480.81 30.05 0.00 0.00 255219.17 862.11 377487.36 00:47:38.207 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:47:38.207 Verification LBA range: start 0x4ff7 length 0x4ff7 00:47:38.207 Nvme0n1p2 : 5.17 445.53 27.85 0.00 0.00 280466.88 16852.11 367500.92 00:47:38.207 =================================================================================================================== 00:47:38.207 Total : 1865.42 116.59 0.00 0.00 265527.83 862.11 379484.65 00:47:40.111 00:47:40.111 real 0m8.124s 00:47:40.111 user 0m14.822s 00:47:40.111 sys 0m0.222s 00:47:40.111 12:11:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:40.111 12:11:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:47:40.111 ************************************ 00:47:40.111 END TEST bdev_verify_big_io 00:47:40.111 ************************************ 00:47:40.111 12:11:12 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:40.111 12:11:12 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:47:40.111 12:11:12 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:40.111 12:11:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:40.111 ************************************ 00:47:40.111 START TEST bdev_write_zeroes 00:47:40.111 ************************************ 00:47:40.111 12:11:12 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:40.369 [2024-06-10 12:11:12.235253] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:47:40.369 [2024-06-10 12:11:12.235513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172572 ] 00:47:40.369 [2024-06-10 12:11:12.423184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:40.628 [2024-06-10 12:11:12.641519] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:41.194 Running I/O for 1 seconds... 00:47:42.128 00:47:42.128 Latency(us) 00:47:42.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:42.128 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:47:42.128 Nvme0n1p1 : 1.01 26499.06 103.51 0.00 0.00 4819.74 2699.46 12108.56 00:47:42.128 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:47:42.128 Nvme0n1p2 : 1.01 26454.91 103.34 0.00 0.00 4820.15 3214.38 11921.31 00:47:42.128 =================================================================================================================== 00:47:42.128 Total : 52953.97 206.85 0.00 0.00 4819.94 2699.46 12108.56 00:47:44.036 00:47:44.036 real 0m3.681s 00:47:44.036 user 0m3.300s 00:47:44.036 sys 0m0.280s 00:47:44.036 12:11:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:44.036 ************************************ 00:47:44.036 END TEST bdev_write_zeroes 00:47:44.036 ************************************ 00:47:44.036 12:11:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:47:44.036 12:11:15 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:44.036 12:11:15 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:47:44.036 12:11:15 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:44.036 12:11:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:44.036 ************************************ 00:47:44.036 START TEST bdev_json_nonenclosed 00:47:44.036 ************************************ 00:47:44.036 12:11:15 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:44.036 [2024-06-10 12:11:15.948045] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:47:44.036 [2024-06-10 12:11:15.948449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172641 ] 00:47:44.295 [2024-06-10 12:11:16.107385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:44.295 [2024-06-10 12:11:16.341529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:44.295 [2024-06-10 12:11:16.341632] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:47:44.295 [2024-06-10 12:11:16.341693] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:47:44.295 [2024-06-10 12:11:16.341721] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:44.861 00:47:44.861 real 0m0.980s 00:47:44.861 user 0m0.740s 00:47:44.861 sys 0m0.140s 00:47:44.861 12:11:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:44.861 12:11:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:47:44.861 ************************************ 00:47:44.861 END TEST bdev_json_nonenclosed 00:47:44.861 ************************************ 00:47:44.861 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:44.861 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:47:44.861 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:44.861 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:45.119 ************************************ 00:47:45.119 START TEST bdev_json_nonarray 00:47:45.119 ************************************ 00:47:45.119 12:11:16 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:47:45.119 [2024-06-10 12:11:17.012281] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:47:45.119 [2024-06-10 12:11:17.012499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172674 ] 00:47:45.377 [2024-06-10 12:11:17.193599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:45.377 [2024-06-10 12:11:17.422134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:45.377 [2024-06-10 12:11:17.422246] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:47:45.377 [2024-06-10 12:11:17.422317] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:47:45.377 [2024-06-10 12:11:17.422348] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:45.943 00:47:45.943 real 0m1.036s 00:47:45.943 user 0m0.803s 00:47:45.943 sys 0m0.132s 00:47:45.943 12:11:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:45.943 12:11:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:47:45.943 ************************************ 00:47:45.943 END TEST bdev_json_nonarray 00:47:45.943 ************************************ 00:47:46.201 12:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:47:46.201 12:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:47:46.201 12:11:18 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:47:46.201 12:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:47:46.201 12:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:46.201 12:11:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:46.201 ************************************ 00:47:46.201 START TEST bdev_gpt_uuid 00:47:46.201 ************************************ 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # bdev_gpt_uuid 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=172712 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 172712 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@830 -- # '[' -z 172712 ']' 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local max_retries=100 00:47:46.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # xtrace_disable 00:47:46.201 12:11:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:47:46.201 [2024-06-10 12:11:18.114728] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
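The two JSON negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf configs that json_config_prepare_ctx rejects: one is not enclosed in {}, the other makes 'subsystems' something other than an array. For reference, a sketch of the shape the loader does accept is below (shell heredoc); the malloc bdev entry is illustrative only and is not the contents of the bdev.json the suite generates.

# minimal accepted shape: a top-level object whose "subsystems" key is an array
cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
      ]
    }
  ]
}
EOF
# nonenclosed.json omits the outer {} and nonarray.json makes "subsystems" a
# non-array value, which is why both runs above stop with the *ERROR* lines and
# a non-zero spdk_app_stop instead of executing the write_zeroes workload.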
00:47:46.201 [2024-06-10 12:11:18.114921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172712 ] 00:47:46.458 [2024-06-10 12:11:18.281282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:46.458 [2024-06-10 12:11:18.489499] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@863 -- # return 0 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:47:47.426 Some configs were skipped because the RPC state that can call them passed over. 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:47.426 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:47:47.426 { 00:47:47.426 "name": "Nvme0n1p1", 00:47:47.426 "aliases": [ 00:47:47.426 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:47:47.426 ], 00:47:47.426 "product_name": "GPT Disk", 00:47:47.426 "block_size": 4096, 00:47:47.426 "num_blocks": 655104, 00:47:47.426 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:47:47.426 "assigned_rate_limits": { 00:47:47.426 "rw_ios_per_sec": 0, 00:47:47.426 "rw_mbytes_per_sec": 0, 00:47:47.426 "r_mbytes_per_sec": 0, 00:47:47.426 "w_mbytes_per_sec": 0 00:47:47.426 }, 00:47:47.426 "claimed": false, 00:47:47.426 "zoned": false, 00:47:47.426 "supported_io_types": { 00:47:47.426 "read": true, 00:47:47.426 "write": true, 00:47:47.426 "unmap": true, 00:47:47.426 "write_zeroes": true, 00:47:47.426 "flush": true, 00:47:47.426 "reset": true, 00:47:47.426 "compare": true, 00:47:47.426 "compare_and_write": false, 00:47:47.426 "abort": true, 00:47:47.426 "nvme_admin": false, 00:47:47.426 "nvme_io": false 00:47:47.426 }, 00:47:47.426 "driver_specific": { 00:47:47.426 "gpt": { 00:47:47.426 "base_bdev": "Nvme0n1", 00:47:47.426 "offset_blocks": 256, 00:47:47.426 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:47:47.426 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:47:47.426 "partition_name": "SPDK_TEST_first" 00:47:47.426 } 00:47:47.426 } 
00:47:47.426 } 00:47:47.426 ]' 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:47:47.692 { 00:47:47.692 "name": "Nvme0n1p2", 00:47:47.692 "aliases": [ 00:47:47.692 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:47:47.692 ], 00:47:47.692 "product_name": "GPT Disk", 00:47:47.692 "block_size": 4096, 00:47:47.692 "num_blocks": 655103, 00:47:47.692 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:47:47.692 "assigned_rate_limits": { 00:47:47.692 "rw_ios_per_sec": 0, 00:47:47.692 "rw_mbytes_per_sec": 0, 00:47:47.692 "r_mbytes_per_sec": 0, 00:47:47.692 "w_mbytes_per_sec": 0 00:47:47.692 }, 00:47:47.692 "claimed": false, 00:47:47.692 "zoned": false, 00:47:47.692 "supported_io_types": { 00:47:47.692 "read": true, 00:47:47.692 "write": true, 00:47:47.692 "unmap": true, 00:47:47.692 "write_zeroes": true, 00:47:47.692 "flush": true, 00:47:47.692 "reset": true, 00:47:47.692 "compare": true, 00:47:47.692 "compare_and_write": false, 00:47:47.692 "abort": true, 00:47:47.692 "nvme_admin": false, 00:47:47.692 "nvme_io": false 00:47:47.692 }, 00:47:47.692 "driver_specific": { 00:47:47.692 "gpt": { 00:47:47.692 "base_bdev": "Nvme0n1", 00:47:47.692 "offset_blocks": 655360, 00:47:47.692 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:47:47.692 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:47:47.692 "partition_name": "SPDK_TEST_second" 00:47:47.692 } 00:47:47.692 } 00:47:47.692 } 00:47:47.692 ]' 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:47:47.692 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == 
\a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 172712 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@949 -- # '[' -z 172712 ']' 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # kill -0 172712 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # uname 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 172712 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:47:47.950 killing process with pid 172712 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 172712' 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # kill 172712 00:47:47.950 12:11:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # wait 172712 00:47:50.477 00:47:50.477 real 0m4.344s 00:47:50.477 user 0m4.569s 00:47:50.477 sys 0m0.474s 00:47:50.477 12:11:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:50.477 12:11:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:47:50.477 ************************************ 00:47:50.477 END TEST bdev_gpt_uuid 00:47:50.477 ************************************ 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:47:50.477 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:50.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:50.735 Waiting for block devices as requested 00:47:50.992 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:50.992 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:47:50.992 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:47:50.992 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:47:50.992 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:47:50.992 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:47:50.992 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:47:50.992 12:11:22 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:47:50.992 00:47:50.992 real 
0m47.854s 00:47:50.992 user 1m6.098s 00:47:50.992 sys 0m6.806s 00:47:50.992 12:11:22 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:50.992 ************************************ 00:47:50.992 END TEST blockdev_nvme_gpt 00:47:50.992 ************************************ 00:47:50.992 12:11:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:47:50.992 12:11:23 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:47:50.992 12:11:23 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:47:50.992 12:11:23 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:50.992 12:11:23 -- common/autotest_common.sh@10 -- # set +x 00:47:50.992 ************************************ 00:47:50.992 START TEST nvme 00:47:50.992 ************************************ 00:47:50.992 12:11:23 nvme -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:47:51.250 * Looking for test storage... 00:47:51.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:47:51.250 12:11:23 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:51.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:51.766 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:47:52.702 12:11:24 nvme -- nvme/nvme.sh@79 -- # uname 00:47:52.702 12:11:24 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:47:52.702 12:11:24 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:47:52.702 12:11:24 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1081 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1067 -- # _randomize_va_space=2 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1068 -- # echo 0 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1069 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1070 -- # stubpid=173128 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1071 -- # echo Waiting for stub to ready for secondary processes... 00:47:52.702 Waiting for stub to ready for secondary processes... 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1072 -- # '[' -e /var/run/spdk_stub0 ']' 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1074 -- # [[ -e /proc/173128 ]] 00:47:52.702 12:11:24 nvme -- common/autotest_common.sh@1075 -- # sleep 1s 00:47:52.702 [2024-06-10 12:11:24.745027] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
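The bdev_gpt_uuid test that finished above drives its assertions with bdev_get_bdevs plus jq: look up each GPT partition bdev by its alias UUID and check that the gpt driver reports the same unique_partition_guid. A condensed sketch of that check follows (shell), assuming the spdk_tgt from that run is still listening on the default RPC socket; the GUIDs are the ones reported in the output above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# unique partition GUIDs reported for SPDK_TEST_first / SPDK_TEST_second above
for guid in 6f89f330-603b-4116-ac73-2ca8eae53030 \
            abf1734f-66e5-4c0f-aa29-4021d4d307df; do
  got=$("$rpc" bdev_get_bdevs -b "$guid" \
        | jq -r '.[0].driver_specific.gpt.unique_partition_guid')
  # the alias UUID and the GPT unique partition GUID must match
  [[ "$got" == "$guid" ]] || { echo "GUID mismatch for $guid: $got" >&2; exit 1; }
done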
00:47:52.702 [2024-06-10 12:11:24.745272] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:47:53.637 12:11:25 nvme -- common/autotest_common.sh@1072 -- # '[' -e /var/run/spdk_stub0 ']' 00:47:53.637 12:11:25 nvme -- common/autotest_common.sh@1074 -- # [[ -e /proc/173128 ]] 00:47:53.637 12:11:25 nvme -- common/autotest_common.sh@1075 -- # sleep 1s 00:47:53.895 [2024-06-10 12:11:25.808189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:54.153 [2024-06-10 12:11:26.047703] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:47:54.153 [2024-06-10 12:11:26.047779] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:47:54.153 [2024-06-10 12:11:26.048027] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:47:54.153 [2024-06-10 12:11:26.058398] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:47:54.154 [2024-06-10 12:11:26.058687] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:47:54.154 [2024-06-10 12:11:26.069094] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:47:54.154 [2024-06-10 12:11:26.072243] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:47:54.720 12:11:26 nvme -- common/autotest_common.sh@1072 -- # '[' -e /var/run/spdk_stub0 ']' 00:47:54.720 done. 00:47:54.720 12:11:26 nvme -- common/autotest_common.sh@1077 -- # echo done. 00:47:54.720 12:11:26 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:47:54.720 12:11:26 nvme -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:47:54.720 12:11:26 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:54.720 12:11:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:54.720 ************************************ 00:47:54.720 START TEST nvme_reset 00:47:54.720 ************************************ 00:47:54.720 12:11:26 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:47:55.287 Initializing NVMe Controllers 00:47:55.287 Skipping QEMU NVMe SSD at 0000:00:10.0 00:47:55.287 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:47:55.287 00:47:55.287 real 0m0.402s 00:47:55.287 user 0m0.112s 00:47:55.287 sys 0m0.181s 00:47:55.287 12:11:27 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:55.287 12:11:27 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:47:55.287 ************************************ 00:47:55.287 END TEST nvme_reset 00:47:55.287 ************************************ 00:47:55.287 12:11:27 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:47:55.287 12:11:27 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:47:55.287 12:11:27 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:55.287 12:11:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:55.287 ************************************ 00:47:55.287 START TEST nvme_identify 00:47:55.287 ************************************ 00:47:55.287 12:11:27 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # nvme_identify 00:47:55.287 12:11:27 
nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:47:55.287 12:11:27 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:47:55.287 12:11:27 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:47:55.287 12:11:27 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:47:55.287 12:11:27 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # bdfs=() 00:47:55.287 12:11:27 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # local bdfs 00:47:55.287 12:11:27 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:47:55.287 12:11:27 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:55.287 12:11:27 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:47:55.287 12:11:27 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:47:55.287 12:11:27 nvme.nvme_identify -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:47:55.287 12:11:27 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:47:55.545 [2024-06-10 12:11:27.556895] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 173162 terminated unexpected 00:47:55.545 ===================================================== 00:47:55.545 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:55.545 ===================================================== 00:47:55.545 Controller Capabilities/Features 00:47:55.545 ================================ 00:47:55.545 Vendor ID: 1b36 00:47:55.545 Subsystem Vendor ID: 1af4 00:47:55.545 Serial Number: 12340 00:47:55.545 Model Number: QEMU NVMe Ctrl 00:47:55.545 Firmware Version: 8.0.0 00:47:55.545 Recommended Arb Burst: 6 00:47:55.545 IEEE OUI Identifier: 00 54 52 00:47:55.545 Multi-path I/O 00:47:55.545 May have multiple subsystem ports: No 00:47:55.545 May have multiple controllers: No 00:47:55.545 Associated with SR-IOV VF: No 00:47:55.545 Max Data Transfer Size: 524288 00:47:55.545 Max Number of Namespaces: 256 00:47:55.545 Max Number of I/O Queues: 64 00:47:55.545 NVMe Specification Version (VS): 1.4 00:47:55.545 NVMe Specification Version (Identify): 1.4 00:47:55.545 Maximum Queue Entries: 2048 00:47:55.545 Contiguous Queues Required: Yes 00:47:55.545 Arbitration Mechanisms Supported 00:47:55.545 Weighted Round Robin: Not Supported 00:47:55.545 Vendor Specific: Not Supported 00:47:55.545 Reset Timeout: 7500 ms 00:47:55.545 Doorbell Stride: 4 bytes 00:47:55.545 NVM Subsystem Reset: Not Supported 00:47:55.546 Command Sets Supported 00:47:55.546 NVM Command Set: Supported 00:47:55.546 Boot Partition: Not Supported 00:47:55.546 Memory Page Size Minimum: 4096 bytes 00:47:55.546 Memory Page Size Maximum: 65536 bytes 00:47:55.546 Persistent Memory Region: Not Supported 00:47:55.546 Optional Asynchronous Events Supported 00:47:55.546 Namespace Attribute Notices: Supported 00:47:55.546 Firmware Activation Notices: Not Supported 00:47:55.546 ANA Change Notices: Not Supported 00:47:55.546 PLE Aggregate Log Change Notices: Not Supported 00:47:55.546 LBA Status Info Alert Notices: Not Supported 00:47:55.546 EGE Aggregate Log Change Notices: Not Supported 00:47:55.546 Normal NVM Subsystem Shutdown event: Not Supported 00:47:55.546 Zone Descriptor Change Notices: Not Supported 00:47:55.546 Discovery Log Change Notices: Not Supported 00:47:55.546 Controller Attributes 00:47:55.546 128-bit Host 
Identifier: Not Supported 00:47:55.546 Non-Operational Permissive Mode: Not Supported 00:47:55.546 NVM Sets: Not Supported 00:47:55.546 Read Recovery Levels: Not Supported 00:47:55.546 Endurance Groups: Not Supported 00:47:55.546 Predictable Latency Mode: Not Supported 00:47:55.546 Traffic Based Keep ALive: Not Supported 00:47:55.546 Namespace Granularity: Not Supported 00:47:55.546 SQ Associations: Not Supported 00:47:55.546 UUID List: Not Supported 00:47:55.546 Multi-Domain Subsystem: Not Supported 00:47:55.546 Fixed Capacity Management: Not Supported 00:47:55.546 Variable Capacity Management: Not Supported 00:47:55.546 Delete Endurance Group: Not Supported 00:47:55.546 Delete NVM Set: Not Supported 00:47:55.546 Extended LBA Formats Supported: Supported 00:47:55.546 Flexible Data Placement Supported: Not Supported 00:47:55.546 00:47:55.546 Controller Memory Buffer Support 00:47:55.546 ================================ 00:47:55.546 Supported: No 00:47:55.546 00:47:55.546 Persistent Memory Region Support 00:47:55.546 ================================ 00:47:55.546 Supported: No 00:47:55.546 00:47:55.546 Admin Command Set Attributes 00:47:55.546 ============================ 00:47:55.546 Security Send/Receive: Not Supported 00:47:55.546 Format NVM: Supported 00:47:55.546 Firmware Activate/Download: Not Supported 00:47:55.546 Namespace Management: Supported 00:47:55.546 Device Self-Test: Not Supported 00:47:55.546 Directives: Supported 00:47:55.546 NVMe-MI: Not Supported 00:47:55.546 Virtualization Management: Not Supported 00:47:55.546 Doorbell Buffer Config: Supported 00:47:55.546 Get LBA Status Capability: Not Supported 00:47:55.546 Command & Feature Lockdown Capability: Not Supported 00:47:55.546 Abort Command Limit: 4 00:47:55.546 Async Event Request Limit: 4 00:47:55.546 Number of Firmware Slots: N/A 00:47:55.546 Firmware Slot 1 Read-Only: N/A 00:47:55.546 Firmware Activation Without Reset: N/A 00:47:55.546 Multiple Update Detection Support: N/A 00:47:55.546 Firmware Update Granularity: No Information Provided 00:47:55.546 Per-Namespace SMART Log: Yes 00:47:55.546 Asymmetric Namespace Access Log Page: Not Supported 00:47:55.546 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:47:55.546 Command Effects Log Page: Supported 00:47:55.546 Get Log Page Extended Data: Supported 00:47:55.546 Telemetry Log Pages: Not Supported 00:47:55.546 Persistent Event Log Pages: Not Supported 00:47:55.546 Supported Log Pages Log Page: May Support 00:47:55.546 Commands Supported & Effects Log Page: Not Supported 00:47:55.546 Feature Identifiers & Effects Log Page:May Support 00:47:55.546 NVMe-MI Commands & Effects Log Page: May Support 00:47:55.546 Data Area 4 for Telemetry Log: Not Supported 00:47:55.546 Error Log Page Entries Supported: 1 00:47:55.546 Keep Alive: Not Supported 00:47:55.546 00:47:55.546 NVM Command Set Attributes 00:47:55.546 ========================== 00:47:55.546 Submission Queue Entry Size 00:47:55.546 Max: 64 00:47:55.546 Min: 64 00:47:55.546 Completion Queue Entry Size 00:47:55.546 Max: 16 00:47:55.546 Min: 16 00:47:55.546 Number of Namespaces: 256 00:47:55.546 Compare Command: Supported 00:47:55.546 Write Uncorrectable Command: Not Supported 00:47:55.546 Dataset Management Command: Supported 00:47:55.546 Write Zeroes Command: Supported 00:47:55.546 Set Features Save Field: Supported 00:47:55.546 Reservations: Not Supported 00:47:55.546 Timestamp: Supported 00:47:55.546 Copy: Supported 00:47:55.546 Volatile Write Cache: Present 00:47:55.546 Atomic Write Unit (Normal): 1 00:47:55.546 Atomic 
Write Unit (PFail): 1 00:47:55.546 Atomic Compare & Write Unit: 1 00:47:55.546 Fused Compare & Write: Not Supported 00:47:55.546 Scatter-Gather List 00:47:55.546 SGL Command Set: Supported 00:47:55.546 SGL Keyed: Not Supported 00:47:55.546 SGL Bit Bucket Descriptor: Not Supported 00:47:55.546 SGL Metadata Pointer: Not Supported 00:47:55.546 Oversized SGL: Not Supported 00:47:55.546 SGL Metadata Address: Not Supported 00:47:55.546 SGL Offset: Not Supported 00:47:55.546 Transport SGL Data Block: Not Supported 00:47:55.546 Replay Protected Memory Block: Not Supported 00:47:55.546 00:47:55.546 Firmware Slot Information 00:47:55.546 ========================= 00:47:55.546 Active slot: 1 00:47:55.546 Slot 1 Firmware Revision: 1.0 00:47:55.546 00:47:55.546 00:47:55.546 Commands Supported and Effects 00:47:55.546 ============================== 00:47:55.546 Admin Commands 00:47:55.546 -------------- 00:47:55.546 Delete I/O Submission Queue (00h): Supported 00:47:55.546 Create I/O Submission Queue (01h): Supported 00:47:55.546 Get Log Page (02h): Supported 00:47:55.546 Delete I/O Completion Queue (04h): Supported 00:47:55.546 Create I/O Completion Queue (05h): Supported 00:47:55.546 Identify (06h): Supported 00:47:55.546 Abort (08h): Supported 00:47:55.546 Set Features (09h): Supported 00:47:55.546 Get Features (0Ah): Supported 00:47:55.546 Asynchronous Event Request (0Ch): Supported 00:47:55.546 Namespace Attachment (15h): Supported NS-Inventory-Change 00:47:55.546 Directive Send (19h): Supported 00:47:55.546 Directive Receive (1Ah): Supported 00:47:55.546 Virtualization Management (1Ch): Supported 00:47:55.546 Doorbell Buffer Config (7Ch): Supported 00:47:55.546 Format NVM (80h): Supported LBA-Change 00:47:55.546 I/O Commands 00:47:55.546 ------------ 00:47:55.546 Flush (00h): Supported LBA-Change 00:47:55.546 Write (01h): Supported LBA-Change 00:47:55.546 Read (02h): Supported 00:47:55.546 Compare (05h): Supported 00:47:55.546 Write Zeroes (08h): Supported LBA-Change 00:47:55.546 Dataset Management (09h): Supported LBA-Change 00:47:55.546 Unknown (0Ch): Supported 00:47:55.546 Unknown (12h): Supported 00:47:55.546 Copy (19h): Supported LBA-Change 00:47:55.546 Unknown (1Dh): Supported LBA-Change 00:47:55.546 00:47:55.546 Error Log 00:47:55.546 ========= 00:47:55.546 00:47:55.546 Arbitration 00:47:55.546 =========== 00:47:55.546 Arbitration Burst: no limit 00:47:55.546 00:47:55.546 Power Management 00:47:55.546 ================ 00:47:55.546 Number of Power States: 1 00:47:55.546 Current Power State: Power State #0 00:47:55.546 Power State #0: 00:47:55.546 Max Power: 25.00 W 00:47:55.546 Non-Operational State: Operational 00:47:55.546 Entry Latency: 16 microseconds 00:47:55.546 Exit Latency: 4 microseconds 00:47:55.546 Relative Read Throughput: 0 00:47:55.546 Relative Read Latency: 0 00:47:55.546 Relative Write Throughput: 0 00:47:55.546 Relative Write Latency: 0 00:47:55.805 Idle Power: Not Reported 00:47:55.805 Active Power: Not Reported 00:47:55.805 Non-Operational Permissive Mode: Not Supported 00:47:55.805 00:47:55.805 Health Information 00:47:55.805 ================== 00:47:55.805 Critical Warnings: 00:47:55.805 Available Spare Space: OK 00:47:55.805 Temperature: OK 00:47:55.805 Device Reliability: OK 00:47:55.805 Read Only: No 00:47:55.805 Volatile Memory Backup: OK 00:47:55.805 Current Temperature: 323 Kelvin (50 Celsius) 00:47:55.805 Temperature Threshold: 343 Kelvin (70 Celsius) 00:47:55.805 Available Spare: 0% 00:47:55.805 Available Spare Threshold: 0% 00:47:55.805 Life Percentage Used: 0% 
00:47:55.805 Data Units Read: 4505 00:47:55.805 Data Units Written: 4169 00:47:55.805 Host Read Commands: 218440 00:47:55.805 Host Write Commands: 231489 00:47:55.805 Controller Busy Time: 0 minutes 00:47:55.805 Power Cycles: 0 00:47:55.805 Power On Hours: 0 hours 00:47:55.805 Unsafe Shutdowns: 0 00:47:55.805 Unrecoverable Media Errors: 0 00:47:55.805 Lifetime Error Log Entries: 0 00:47:55.805 Warning Temperature Time: 0 minutes 00:47:55.805 Critical Temperature Time: 0 minutes 00:47:55.805 00:47:55.805 Number of Queues 00:47:55.805 ================ 00:47:55.805 Number of I/O Submission Queues: 64 00:47:55.805 Number of I/O Completion Queues: 64 00:47:55.805 00:47:55.805 ZNS Specific Controller Data 00:47:55.805 ============================ 00:47:55.805 Zone Append Size Limit: 0 00:47:55.805 00:47:55.805 00:47:55.805 Active Namespaces 00:47:55.805 ================= 00:47:55.805 Namespace ID:1 00:47:55.805 Error Recovery Timeout: Unlimited 00:47:55.805 Command Set Identifier: NVM (00h) 00:47:55.805 Deallocate: Supported 00:47:55.805 Deallocated/Unwritten Error: Supported 00:47:55.805 Deallocated Read Value: All 0x00 00:47:55.805 Deallocate in Write Zeroes: Not Supported 00:47:55.805 Deallocated Guard Field: 0xFFFF 00:47:55.805 Flush: Supported 00:47:55.805 Reservation: Not Supported 00:47:55.805 Namespace Sharing Capabilities: Private 00:47:55.805 Size (in LBAs): 1310720 (5GiB) 00:47:55.805 Capacity (in LBAs): 1310720 (5GiB) 00:47:55.805 Utilization (in LBAs): 1310720 (5GiB) 00:47:55.805 Thin Provisioning: Not Supported 00:47:55.805 Per-NS Atomic Units: No 00:47:55.805 Maximum Single Source Range Length: 128 00:47:55.805 Maximum Copy Length: 128 00:47:55.805 Maximum Source Range Count: 128 00:47:55.805 NGUID/EUI64 Never Reused: No 00:47:55.805 Namespace Write Protected: No 00:47:55.805 Number of LBA Formats: 8 00:47:55.805 Current LBA Format: LBA Format #04 00:47:55.805 LBA Format #00: Data Size: 512 Metadata Size: 0 00:47:55.805 LBA Format #01: Data Size: 512 Metadata Size: 8 00:47:55.805 LBA Format #02: Data Size: 512 Metadata Size: 16 00:47:55.805 LBA Format #03: Data Size: 512 Metadata Size: 64 00:47:55.805 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:47:55.805 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:47:55.805 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:47:55.805 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:47:55.805 00:47:55.805 12:11:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:47:55.805 12:11:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:47:56.064 ===================================================== 00:47:56.064 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:56.064 ===================================================== 00:47:56.064 Controller Capabilities/Features 00:47:56.064 ================================ 00:47:56.064 Vendor ID: 1b36 00:47:56.064 Subsystem Vendor ID: 1af4 00:47:56.064 Serial Number: 12340 00:47:56.064 Model Number: QEMU NVMe Ctrl 00:47:56.064 Firmware Version: 8.0.0 00:47:56.064 Recommended Arb Burst: 6 00:47:56.064 IEEE OUI Identifier: 00 54 52 00:47:56.064 Multi-path I/O 00:47:56.064 May have multiple subsystem ports: No 00:47:56.064 May have multiple controllers: No 00:47:56.064 Associated with SR-IOV VF: No 00:47:56.064 Max Data Transfer Size: 524288 00:47:56.064 Max Number of Namespaces: 256 00:47:56.064 Max Number of I/O Queues: 64 00:47:56.064 NVMe Specification Version (VS): 1.4 
00:47:56.064 NVMe Specification Version (Identify): 1.4 00:47:56.064 Maximum Queue Entries: 2048 00:47:56.064 Contiguous Queues Required: Yes 00:47:56.064 Arbitration Mechanisms Supported 00:47:56.064 Weighted Round Robin: Not Supported 00:47:56.064 Vendor Specific: Not Supported 00:47:56.064 Reset Timeout: 7500 ms 00:47:56.064 Doorbell Stride: 4 bytes 00:47:56.064 NVM Subsystem Reset: Not Supported 00:47:56.064 Command Sets Supported 00:47:56.064 NVM Command Set: Supported 00:47:56.064 Boot Partition: Not Supported 00:47:56.064 Memory Page Size Minimum: 4096 bytes 00:47:56.064 Memory Page Size Maximum: 65536 bytes 00:47:56.064 Persistent Memory Region: Not Supported 00:47:56.064 Optional Asynchronous Events Supported 00:47:56.064 Namespace Attribute Notices: Supported 00:47:56.064 Firmware Activation Notices: Not Supported 00:47:56.064 ANA Change Notices: Not Supported 00:47:56.064 PLE Aggregate Log Change Notices: Not Supported 00:47:56.064 LBA Status Info Alert Notices: Not Supported 00:47:56.064 EGE Aggregate Log Change Notices: Not Supported 00:47:56.064 Normal NVM Subsystem Shutdown event: Not Supported 00:47:56.064 Zone Descriptor Change Notices: Not Supported 00:47:56.064 Discovery Log Change Notices: Not Supported 00:47:56.064 Controller Attributes 00:47:56.064 128-bit Host Identifier: Not Supported 00:47:56.064 Non-Operational Permissive Mode: Not Supported 00:47:56.064 NVM Sets: Not Supported 00:47:56.064 Read Recovery Levels: Not Supported 00:47:56.064 Endurance Groups: Not Supported 00:47:56.064 Predictable Latency Mode: Not Supported 00:47:56.064 Traffic Based Keep ALive: Not Supported 00:47:56.064 Namespace Granularity: Not Supported 00:47:56.064 SQ Associations: Not Supported 00:47:56.064 UUID List: Not Supported 00:47:56.064 Multi-Domain Subsystem: Not Supported 00:47:56.064 Fixed Capacity Management: Not Supported 00:47:56.064 Variable Capacity Management: Not Supported 00:47:56.064 Delete Endurance Group: Not Supported 00:47:56.064 Delete NVM Set: Not Supported 00:47:56.064 Extended LBA Formats Supported: Supported 00:47:56.064 Flexible Data Placement Supported: Not Supported 00:47:56.064 00:47:56.064 Controller Memory Buffer Support 00:47:56.065 ================================ 00:47:56.065 Supported: No 00:47:56.065 00:47:56.065 Persistent Memory Region Support 00:47:56.065 ================================ 00:47:56.065 Supported: No 00:47:56.065 00:47:56.065 Admin Command Set Attributes 00:47:56.065 ============================ 00:47:56.065 Security Send/Receive: Not Supported 00:47:56.065 Format NVM: Supported 00:47:56.065 Firmware Activate/Download: Not Supported 00:47:56.065 Namespace Management: Supported 00:47:56.065 Device Self-Test: Not Supported 00:47:56.065 Directives: Supported 00:47:56.065 NVMe-MI: Not Supported 00:47:56.065 Virtualization Management: Not Supported 00:47:56.065 Doorbell Buffer Config: Supported 00:47:56.065 Get LBA Status Capability: Not Supported 00:47:56.065 Command & Feature Lockdown Capability: Not Supported 00:47:56.065 Abort Command Limit: 4 00:47:56.065 Async Event Request Limit: 4 00:47:56.065 Number of Firmware Slots: N/A 00:47:56.065 Firmware Slot 1 Read-Only: N/A 00:47:56.065 Firmware Activation Without Reset: N/A 00:47:56.065 Multiple Update Detection Support: N/A 00:47:56.065 Firmware Update Granularity: No Information Provided 00:47:56.065 Per-Namespace SMART Log: Yes 00:47:56.065 Asymmetric Namespace Access Log Page: Not Supported 00:47:56.065 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:47:56.065 Command Effects Log Page: 
Supported 00:47:56.065 Get Log Page Extended Data: Supported 00:47:56.065 Telemetry Log Pages: Not Supported 00:47:56.065 Persistent Event Log Pages: Not Supported 00:47:56.065 Supported Log Pages Log Page: May Support 00:47:56.065 Commands Supported & Effects Log Page: Not Supported 00:47:56.065 Feature Identifiers & Effects Log Page:May Support 00:47:56.065 NVMe-MI Commands & Effects Log Page: May Support 00:47:56.065 Data Area 4 for Telemetry Log: Not Supported 00:47:56.065 Error Log Page Entries Supported: 1 00:47:56.065 Keep Alive: Not Supported 00:47:56.065 00:47:56.065 NVM Command Set Attributes 00:47:56.065 ========================== 00:47:56.065 Submission Queue Entry Size 00:47:56.065 Max: 64 00:47:56.065 Min: 64 00:47:56.065 Completion Queue Entry Size 00:47:56.065 Max: 16 00:47:56.065 Min: 16 00:47:56.065 Number of Namespaces: 256 00:47:56.065 Compare Command: Supported 00:47:56.065 Write Uncorrectable Command: Not Supported 00:47:56.065 Dataset Management Command: Supported 00:47:56.065 Write Zeroes Command: Supported 00:47:56.065 Set Features Save Field: Supported 00:47:56.065 Reservations: Not Supported 00:47:56.065 Timestamp: Supported 00:47:56.065 Copy: Supported 00:47:56.065 Volatile Write Cache: Present 00:47:56.065 Atomic Write Unit (Normal): 1 00:47:56.065 Atomic Write Unit (PFail): 1 00:47:56.065 Atomic Compare & Write Unit: 1 00:47:56.065 Fused Compare & Write: Not Supported 00:47:56.065 Scatter-Gather List 00:47:56.065 SGL Command Set: Supported 00:47:56.065 SGL Keyed: Not Supported 00:47:56.065 SGL Bit Bucket Descriptor: Not Supported 00:47:56.065 SGL Metadata Pointer: Not Supported 00:47:56.065 Oversized SGL: Not Supported 00:47:56.065 SGL Metadata Address: Not Supported 00:47:56.065 SGL Offset: Not Supported 00:47:56.065 Transport SGL Data Block: Not Supported 00:47:56.065 Replay Protected Memory Block: Not Supported 00:47:56.065 00:47:56.065 Firmware Slot Information 00:47:56.065 ========================= 00:47:56.065 Active slot: 1 00:47:56.065 Slot 1 Firmware Revision: 1.0 00:47:56.065 00:47:56.065 00:47:56.065 Commands Supported and Effects 00:47:56.065 ============================== 00:47:56.065 Admin Commands 00:47:56.065 -------------- 00:47:56.065 Delete I/O Submission Queue (00h): Supported 00:47:56.065 Create I/O Submission Queue (01h): Supported 00:47:56.065 Get Log Page (02h): Supported 00:47:56.065 Delete I/O Completion Queue (04h): Supported 00:47:56.065 Create I/O Completion Queue (05h): Supported 00:47:56.065 Identify (06h): Supported 00:47:56.065 Abort (08h): Supported 00:47:56.065 Set Features (09h): Supported 00:47:56.065 Get Features (0Ah): Supported 00:47:56.065 Asynchronous Event Request (0Ch): Supported 00:47:56.065 Namespace Attachment (15h): Supported NS-Inventory-Change 00:47:56.065 Directive Send (19h): Supported 00:47:56.065 Directive Receive (1Ah): Supported 00:47:56.065 Virtualization Management (1Ch): Supported 00:47:56.065 Doorbell Buffer Config (7Ch): Supported 00:47:56.065 Format NVM (80h): Supported LBA-Change 00:47:56.065 I/O Commands 00:47:56.065 ------------ 00:47:56.065 Flush (00h): Supported LBA-Change 00:47:56.065 Write (01h): Supported LBA-Change 00:47:56.065 Read (02h): Supported 00:47:56.065 Compare (05h): Supported 00:47:56.065 Write Zeroes (08h): Supported LBA-Change 00:47:56.065 Dataset Management (09h): Supported LBA-Change 00:47:56.065 Unknown (0Ch): Supported 00:47:56.065 Unknown (12h): Supported 00:47:56.065 Copy (19h): Supported LBA-Change 00:47:56.065 Unknown (1Dh): Supported LBA-Change 00:47:56.065 
00:47:56.065 Error Log 00:47:56.065 ========= 00:47:56.065 00:47:56.065 Arbitration 00:47:56.065 =========== 00:47:56.065 Arbitration Burst: no limit 00:47:56.065 00:47:56.065 Power Management 00:47:56.065 ================ 00:47:56.065 Number of Power States: 1 00:47:56.065 Current Power State: Power State #0 00:47:56.065 Power State #0: 00:47:56.065 Max Power: 25.00 W 00:47:56.065 Non-Operational State: Operational 00:47:56.065 Entry Latency: 16 microseconds 00:47:56.065 Exit Latency: 4 microseconds 00:47:56.065 Relative Read Throughput: 0 00:47:56.065 Relative Read Latency: 0 00:47:56.065 Relative Write Throughput: 0 00:47:56.065 Relative Write Latency: 0 00:47:56.065 Idle Power: Not Reported 00:47:56.065 Active Power: Not Reported 00:47:56.065 Non-Operational Permissive Mode: Not Supported 00:47:56.065 00:47:56.065 Health Information 00:47:56.065 ================== 00:47:56.065 Critical Warnings: 00:47:56.065 Available Spare Space: OK 00:47:56.065 Temperature: OK 00:47:56.065 Device Reliability: OK 00:47:56.065 Read Only: No 00:47:56.065 Volatile Memory Backup: OK 00:47:56.065 Current Temperature: 323 Kelvin (50 Celsius) 00:47:56.065 Temperature Threshold: 343 Kelvin (70 Celsius) 00:47:56.065 Available Spare: 0% 00:47:56.065 Available Spare Threshold: 0% 00:47:56.065 Life Percentage Used: 0% 00:47:56.065 Data Units Read: 4505 00:47:56.065 Data Units Written: 4169 00:47:56.065 Host Read Commands: 218440 00:47:56.065 Host Write Commands: 231489 00:47:56.065 Controller Busy Time: 0 minutes 00:47:56.065 Power Cycles: 0 00:47:56.065 Power On Hours: 0 hours 00:47:56.065 Unsafe Shutdowns: 0 00:47:56.065 Unrecoverable Media Errors: 0 00:47:56.065 Lifetime Error Log Entries: 0 00:47:56.065 Warning Temperature Time: 0 minutes 00:47:56.065 Critical Temperature Time: 0 minutes 00:47:56.065 00:47:56.065 Number of Queues 00:47:56.065 ================ 00:47:56.065 Number of I/O Submission Queues: 64 00:47:56.065 Number of I/O Completion Queues: 64 00:47:56.065 00:47:56.065 ZNS Specific Controller Data 00:47:56.065 ============================ 00:47:56.065 Zone Append Size Limit: 0 00:47:56.065 00:47:56.065 00:47:56.065 Active Namespaces 00:47:56.065 ================= 00:47:56.065 Namespace ID:1 00:47:56.065 Error Recovery Timeout: Unlimited 00:47:56.065 Command Set Identifier: NVM (00h) 00:47:56.065 Deallocate: Supported 00:47:56.065 Deallocated/Unwritten Error: Supported 00:47:56.065 Deallocated Read Value: All 0x00 00:47:56.065 Deallocate in Write Zeroes: Not Supported 00:47:56.065 Deallocated Guard Field: 0xFFFF 00:47:56.065 Flush: Supported 00:47:56.065 Reservation: Not Supported 00:47:56.065 Namespace Sharing Capabilities: Private 00:47:56.065 Size (in LBAs): 1310720 (5GiB) 00:47:56.065 Capacity (in LBAs): 1310720 (5GiB) 00:47:56.065 Utilization (in LBAs): 1310720 (5GiB) 00:47:56.065 Thin Provisioning: Not Supported 00:47:56.065 Per-NS Atomic Units: No 00:47:56.065 Maximum Single Source Range Length: 128 00:47:56.065 Maximum Copy Length: 128 00:47:56.065 Maximum Source Range Count: 128 00:47:56.065 NGUID/EUI64 Never Reused: No 00:47:56.065 Namespace Write Protected: No 00:47:56.065 Number of LBA Formats: 8 00:47:56.065 Current LBA Format: LBA Format #04 00:47:56.065 LBA Format #00: Data Size: 512 Metadata Size: 0 00:47:56.065 LBA Format #01: Data Size: 512 Metadata Size: 8 00:47:56.065 LBA Format #02: Data Size: 512 Metadata Size: 16 00:47:56.065 LBA Format #03: Data Size: 512 Metadata Size: 64 00:47:56.065 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:47:56.065 LBA Format #05: Data Size: 
4096 Metadata Size: 8 00:47:56.065 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:47:56.065 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:47:56.065 00:47:56.065 00:47:56.065 real 0m0.857s 00:47:56.065 user 0m0.327s 00:47:56.065 sys 0m0.389s 00:47:56.065 12:11:28 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:56.066 12:11:28 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:47:56.066 ************************************ 00:47:56.066 END TEST nvme_identify 00:47:56.066 ************************************ 00:47:56.066 12:11:28 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:47:56.066 12:11:28 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:47:56.066 12:11:28 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:56.066 12:11:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:56.066 ************************************ 00:47:56.066 START TEST nvme_perf 00:47:56.066 ************************************ 00:47:56.066 12:11:28 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # nvme_perf 00:47:56.066 12:11:28 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:47:57.441 Initializing NVMe Controllers 00:47:57.441 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:57.441 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:47:57.441 Initialization complete. Launching workers. 00:47:57.441 ======================================================== 00:47:57.441 Latency(us) 00:47:57.441 Device Information : IOPS MiB/s Average min max 00:47:57.441 PCIE (0000:00:10.0) NSID 1 from core 0: 91551.87 1072.87 1396.87 613.41 5694.00 00:47:57.441 ======================================================== 00:47:57.441 Total : 91551.87 1072.87 1396.87 613.41 5694.00 00:47:57.441 00:47:57.441 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:47:57.441 ================================================================================= 00:47:57.441 1.00000% : 764.587us 00:47:57.441 10.00000% : 920.625us 00:47:57.441 25.00000% : 1092.267us 00:47:57.441 50.00000% : 1373.135us 00:47:57.441 75.00000% : 1654.004us 00:47:57.441 90.00000% : 1833.448us 00:47:57.441 95.00000% : 1966.080us 00:47:57.441 98.00000% : 2402.987us 00:47:57.441 99.00000% : 2715.063us 00:47:57.441 99.50000% : 3089.554us 00:47:57.441 99.90000% : 4275.444us 00:47:57.441 99.99000% : 5430.126us 00:47:57.441 99.99900% : 5710.994us 00:47:57.441 99.99990% : 5710.994us 00:47:57.441 99.99999% : 5710.994us 00:47:57.441 00:47:57.441 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:47:57.441 ============================================================================== 00:47:57.441 Range in us Cumulative IO count 00:47:57.441 612.450 - 616.350: 0.0011% ( 1) 00:47:57.441 624.152 - 628.053: 0.0022% ( 1) 00:47:57.441 628.053 - 631.954: 0.0044% ( 2) 00:47:57.441 639.756 - 643.657: 0.0055% ( 1) 00:47:57.441 643.657 - 647.558: 0.0076% ( 2) 00:47:57.441 647.558 - 651.459: 0.0087% ( 1) 00:47:57.441 651.459 - 655.360: 0.0120% ( 3) 00:47:57.441 655.360 - 659.261: 0.0131% ( 1) 00:47:57.441 659.261 - 663.162: 0.0197% ( 6) 00:47:57.441 663.162 - 667.063: 0.0251% ( 5) 00:47:57.441 667.063 - 670.964: 0.0295% ( 4) 00:47:57.441 670.964 - 674.865: 0.0328% ( 3) 00:47:57.441 674.865 - 678.766: 0.0404% ( 7) 00:47:57.441 678.766 - 682.667: 0.0546% ( 13) 00:47:57.441 682.667 - 686.568: 0.0622% ( 7) 00:47:57.441 686.568 - 690.469: 0.0786% ( 15) 
00:47:57.441 690.469 - 694.370: 0.0917% ( 12) 00:47:57.441 694.370 - 698.270: 0.1081% ( 15) 00:47:57.441 698.270 - 702.171: 0.1267% ( 17) 00:47:57.441 702.171 - 706.072: 0.1485% ( 20) 00:47:57.441 706.072 - 709.973: 0.1703% ( 20) 00:47:57.441 709.973 - 713.874: 0.2042% ( 31) 00:47:57.441 713.874 - 717.775: 0.2391% ( 32) 00:47:57.441 717.775 - 721.676: 0.2773% ( 35) 00:47:57.441 721.676 - 725.577: 0.3308% ( 49) 00:47:57.441 725.577 - 729.478: 0.3931% ( 57) 00:47:57.441 729.478 - 733.379: 0.4346% ( 38) 00:47:57.441 733.379 - 737.280: 0.4826% ( 44) 00:47:57.441 737.280 - 741.181: 0.5394% ( 52) 00:47:57.441 741.181 - 745.082: 0.6158% ( 70) 00:47:57.441 745.082 - 748.983: 0.6868% ( 65) 00:47:57.441 748.983 - 752.884: 0.7589% ( 66) 00:47:57.441 752.884 - 756.785: 0.8539% ( 87) 00:47:57.441 756.785 - 760.686: 0.9652% ( 102) 00:47:57.441 760.686 - 764.587: 1.0733% ( 99) 00:47:57.441 764.587 - 768.488: 1.1738% ( 92) 00:47:57.441 768.488 - 772.389: 1.2917% ( 108) 00:47:57.441 772.389 - 776.290: 1.4271% ( 124) 00:47:57.441 776.290 - 780.190: 1.5854% ( 145) 00:47:57.441 780.190 - 784.091: 1.7438% ( 145) 00:47:57.441 784.091 - 787.992: 1.8857% ( 130) 00:47:57.441 787.992 - 791.893: 2.0397% ( 141) 00:47:57.441 791.893 - 795.794: 2.2209% ( 166) 00:47:57.441 795.794 - 799.695: 2.4164% ( 179) 00:47:57.441 799.695 - 803.596: 2.6118% ( 179) 00:47:57.441 803.596 - 807.497: 2.7832% ( 157) 00:47:57.441 807.497 - 811.398: 2.9536% ( 156) 00:47:57.441 811.398 - 815.299: 3.1578% ( 187) 00:47:57.441 815.299 - 819.200: 3.3718% ( 196) 00:47:57.441 819.200 - 823.101: 3.5814% ( 192) 00:47:57.441 823.101 - 827.002: 3.7507% ( 155) 00:47:57.441 827.002 - 830.903: 3.9843% ( 214) 00:47:57.441 830.903 - 834.804: 4.1972% ( 195) 00:47:57.441 834.804 - 838.705: 4.4385% ( 221) 00:47:57.441 838.705 - 842.606: 4.6569% ( 200) 00:47:57.441 842.606 - 846.507: 4.8895% ( 213) 00:47:57.441 846.507 - 850.408: 5.1472% ( 236) 00:47:57.441 850.408 - 854.309: 5.3929% ( 225) 00:47:57.441 854.309 - 858.210: 5.6320% ( 219) 00:47:57.441 858.210 - 862.110: 5.8886% ( 235) 00:47:57.441 862.110 - 866.011: 6.1201% ( 212) 00:47:57.441 866.011 - 869.912: 6.3832% ( 241) 00:47:57.442 869.912 - 873.813: 6.6311% ( 227) 00:47:57.442 873.813 - 877.714: 6.8898% ( 237) 00:47:57.442 877.714 - 881.615: 7.1661% ( 253) 00:47:57.442 881.615 - 885.516: 7.4565% ( 266) 00:47:57.442 885.516 - 889.417: 7.7306% ( 251) 00:47:57.442 889.417 - 893.318: 7.9959% ( 243) 00:47:57.442 893.318 - 897.219: 8.2678% ( 249) 00:47:57.442 897.219 - 901.120: 8.5823% ( 288) 00:47:57.442 901.120 - 905.021: 8.8334% ( 230) 00:47:57.442 905.021 - 908.922: 9.1228% ( 265) 00:47:57.442 908.922 - 912.823: 9.4241% ( 276) 00:47:57.442 912.823 - 916.724: 9.7397% ( 289) 00:47:57.442 916.724 - 920.625: 10.0345% ( 270) 00:47:57.442 920.625 - 924.526: 10.3413% ( 281) 00:47:57.442 924.526 - 928.427: 10.6711% ( 302) 00:47:57.442 928.427 - 932.328: 10.9910% ( 293) 00:47:57.442 932.328 - 936.229: 11.3251% ( 306) 00:47:57.442 936.229 - 940.130: 11.6276% ( 277) 00:47:57.442 940.130 - 944.030: 11.9366% ( 283) 00:47:57.442 944.030 - 947.931: 12.2827% ( 317) 00:47:57.442 947.931 - 951.832: 12.6201% ( 309) 00:47:57.442 951.832 - 955.733: 12.9346% ( 288) 00:47:57.442 955.733 - 959.634: 13.2763% ( 313) 00:47:57.442 959.634 - 963.535: 13.6170% ( 312) 00:47:57.442 963.535 - 967.436: 13.9566% ( 311) 00:47:57.442 967.436 - 971.337: 14.2754% ( 292) 00:47:57.442 971.337 - 975.238: 14.6194% ( 315) 00:47:57.442 975.238 - 979.139: 14.9731% ( 324) 00:47:57.442 979.139 - 983.040: 15.3138% ( 312) 00:47:57.442 983.040 - 986.941: 
15.6501% ( 308) 00:47:57.442 986.941 - 990.842: 16.0388% ( 356) 00:47:57.442 990.842 - 994.743: 16.3653% ( 299) 00:47:57.442 994.743 - 998.644: 16.7016% ( 308) 00:47:57.442 998.644 - 1006.446: 17.4146% ( 653) 00:47:57.442 1006.446 - 1014.248: 18.0839% ( 613) 00:47:57.442 1014.248 - 1022.050: 18.8111% ( 666) 00:47:57.442 1022.050 - 1029.851: 19.5263% ( 655) 00:47:57.442 1029.851 - 1037.653: 20.2317% ( 646) 00:47:57.442 1037.653 - 1045.455: 20.9109% ( 622) 00:47:57.442 1045.455 - 1053.257: 21.6020% ( 633) 00:47:57.442 1053.257 - 1061.059: 22.3008% ( 640) 00:47:57.442 1061.059 - 1068.861: 23.0182% ( 657) 00:47:57.442 1068.861 - 1076.663: 23.6974% ( 622) 00:47:57.442 1076.663 - 1084.465: 24.4278% ( 669) 00:47:57.442 1084.465 - 1092.267: 25.1179% ( 632) 00:47:57.442 1092.267 - 1100.069: 25.8255% ( 648) 00:47:57.442 1100.069 - 1107.870: 26.5221% ( 638) 00:47:57.442 1107.870 - 1115.672: 27.2187% ( 638) 00:47:57.442 1115.672 - 1123.474: 27.9001% ( 624) 00:47:57.442 1123.474 - 1131.276: 28.5989% ( 640) 00:47:57.442 1131.276 - 1139.078: 29.2890% ( 632) 00:47:57.442 1139.078 - 1146.880: 29.9867% ( 639) 00:47:57.442 1146.880 - 1154.682: 30.7226% ( 674) 00:47:57.442 1154.682 - 1162.484: 31.3636% ( 587) 00:47:57.442 1162.484 - 1170.286: 32.0667% ( 644) 00:47:57.442 1170.286 - 1178.088: 32.7612% ( 636) 00:47:57.442 1178.088 - 1185.890: 33.4622% ( 642) 00:47:57.442 1185.890 - 1193.691: 34.1523% ( 632) 00:47:57.442 1193.691 - 1201.493: 34.8358% ( 626) 00:47:57.442 1201.493 - 1209.295: 35.5084% ( 616) 00:47:57.442 1209.295 - 1217.097: 36.2170% ( 649) 00:47:57.442 1217.097 - 1224.899: 36.8896% ( 616) 00:47:57.442 1224.899 - 1232.701: 37.5972% ( 648) 00:47:57.442 1232.701 - 1240.503: 38.2731% ( 619) 00:47:57.442 1240.503 - 1248.305: 38.9828% ( 650) 00:47:57.442 1248.305 - 1256.107: 39.6860% ( 644) 00:47:57.442 1256.107 - 1263.909: 40.3750% ( 631) 00:47:57.442 1263.909 - 1271.710: 41.0508% ( 619) 00:47:57.442 1271.710 - 1279.512: 41.7322% ( 624) 00:47:57.442 1279.512 - 1287.314: 42.4266% ( 636) 00:47:57.442 1287.314 - 1295.116: 43.1364% ( 650) 00:47:57.442 1295.116 - 1302.918: 43.8210% ( 627) 00:47:57.442 1302.918 - 1310.720: 44.5187% ( 639) 00:47:57.442 1310.720 - 1318.522: 45.2219% ( 644) 00:47:57.442 1318.522 - 1326.324: 45.9229% ( 642) 00:47:57.442 1326.324 - 1334.126: 46.6184% ( 637) 00:47:57.442 1334.126 - 1341.928: 47.3249% ( 647) 00:47:57.442 1341.928 - 1349.730: 48.0117% ( 629) 00:47:57.442 1349.730 - 1357.531: 48.7258% ( 654) 00:47:57.442 1357.531 - 1365.333: 49.4071% ( 624) 00:47:57.442 1365.333 - 1373.135: 50.1103% ( 644) 00:47:57.442 1373.135 - 1380.937: 50.8102% ( 641) 00:47:57.442 1380.937 - 1388.739: 51.5199% ( 650) 00:47:57.442 1388.739 - 1396.541: 52.2209% ( 642) 00:47:57.442 1396.541 - 1404.343: 52.9121% ( 633) 00:47:57.442 1404.343 - 1412.145: 53.6240% ( 652) 00:47:57.442 1412.145 - 1419.947: 54.3468% ( 662) 00:47:57.442 1419.947 - 1427.749: 55.0238% ( 620) 00:47:57.442 1427.749 - 1435.550: 55.7379% ( 654) 00:47:57.442 1435.550 - 1443.352: 56.4378% ( 641) 00:47:57.442 1443.352 - 1451.154: 57.1508% ( 653) 00:47:57.442 1451.154 - 1458.956: 57.8693% ( 658) 00:47:57.442 1458.956 - 1466.758: 58.5604% ( 633) 00:47:57.442 1466.758 - 1474.560: 59.2767% ( 656) 00:47:57.442 1474.560 - 1482.362: 59.9843% ( 648) 00:47:57.442 1482.362 - 1490.164: 60.7049% ( 660) 00:47:57.442 1490.164 - 1497.966: 61.4147% ( 650) 00:47:57.442 1497.966 - 1505.768: 62.1266% ( 652) 00:47:57.442 1505.768 - 1513.570: 62.8276% ( 642) 00:47:57.442 1513.570 - 1521.371: 63.5580% ( 669) 00:47:57.442 1521.371 - 1529.173: 64.2492% ( 
633) 00:47:57.442 1529.173 - 1536.975: 64.9830% ( 672) 00:47:57.442 1536.975 - 1544.777: 65.6763% ( 635) 00:47:57.442 1544.777 - 1552.579: 66.4024% ( 665) 00:47:57.442 1552.579 - 1560.381: 67.0925% ( 632) 00:47:57.442 1560.381 - 1568.183: 67.8306% ( 676) 00:47:57.442 1568.183 - 1575.985: 68.5436% ( 653) 00:47:57.442 1575.985 - 1583.787: 69.2370% ( 635) 00:47:57.442 1583.787 - 1591.589: 69.9937% ( 693) 00:47:57.442 1591.589 - 1599.390: 70.6652% ( 615) 00:47:57.442 1599.390 - 1607.192: 71.4164% ( 688) 00:47:57.442 1607.192 - 1614.994: 72.0988% ( 625) 00:47:57.442 1614.994 - 1622.796: 72.8228% ( 663) 00:47:57.442 1622.796 - 1630.598: 73.5445% ( 661) 00:47:57.442 1630.598 - 1638.400: 74.2444% ( 641) 00:47:57.442 1638.400 - 1646.202: 74.9563% ( 652) 00:47:57.442 1646.202 - 1654.004: 75.6726% ( 656) 00:47:57.442 1654.004 - 1661.806: 76.3638% ( 633) 00:47:57.442 1661.806 - 1669.608: 77.0724% ( 649) 00:47:57.442 1669.608 - 1677.410: 77.7548% ( 625) 00:47:57.442 1677.410 - 1685.211: 78.4689% ( 654) 00:47:57.442 1685.211 - 1693.013: 79.1099% ( 587) 00:47:57.442 1693.013 - 1700.815: 79.8229% ( 653) 00:47:57.442 1700.815 - 1708.617: 80.4606% ( 584) 00:47:57.442 1708.617 - 1716.419: 81.1441% ( 626) 00:47:57.442 1716.419 - 1724.221: 81.7927% ( 594) 00:47:57.442 1724.221 - 1732.023: 82.4434% ( 596) 00:47:57.442 1732.023 - 1739.825: 83.1073% ( 608) 00:47:57.442 1739.825 - 1747.627: 83.7439% ( 583) 00:47:57.442 1747.627 - 1755.429: 84.4001% ( 601) 00:47:57.442 1755.429 - 1763.230: 85.0083% ( 557) 00:47:57.442 1763.230 - 1771.032: 85.6602% ( 597) 00:47:57.442 1771.032 - 1778.834: 86.2673% ( 556) 00:47:57.442 1778.834 - 1786.636: 86.8525% ( 536) 00:47:57.442 1786.636 - 1794.438: 87.4367% ( 535) 00:47:57.442 1794.438 - 1802.240: 88.0034% ( 519) 00:47:57.442 1802.240 - 1810.042: 88.5570% ( 507) 00:47:57.442 1810.042 - 1817.844: 89.0778% ( 477) 00:47:57.442 1817.844 - 1825.646: 89.5746% ( 455) 00:47:57.442 1825.646 - 1833.448: 90.0376% ( 424) 00:47:57.442 1833.448 - 1841.250: 90.4951% ( 419) 00:47:57.442 1841.250 - 1849.051: 90.9133% ( 383) 00:47:57.442 1849.051 - 1856.853: 91.3358% ( 387) 00:47:57.442 1856.853 - 1864.655: 91.7016% ( 335) 00:47:57.442 1864.655 - 1872.457: 92.0685% ( 336) 00:47:57.442 1872.457 - 1880.259: 92.4124% ( 315) 00:47:57.442 1880.259 - 1888.061: 92.7465% ( 306) 00:47:57.442 1888.061 - 1895.863: 93.0490% ( 277) 00:47:57.442 1895.863 - 1903.665: 93.3460% ( 272) 00:47:57.442 1903.665 - 1911.467: 93.6266% ( 257) 00:47:57.442 1911.467 - 1919.269: 93.8887% ( 240) 00:47:57.442 1919.269 - 1927.070: 94.1409% ( 231) 00:47:57.442 1927.070 - 1934.872: 94.3713% ( 211) 00:47:57.442 1934.872 - 1942.674: 94.6017% ( 211) 00:47:57.442 1942.674 - 1950.476: 94.8113% ( 192) 00:47:57.442 1950.476 - 1958.278: 94.9980% ( 171) 00:47:57.442 1958.278 - 1966.080: 95.1716% ( 159) 00:47:57.442 1966.080 - 1973.882: 95.3289% ( 144) 00:47:57.442 1973.882 - 1981.684: 95.4686% ( 128) 00:47:57.442 1981.684 - 1989.486: 95.5844% ( 106) 00:47:57.442 1989.486 - 1997.288: 95.6979% ( 104) 00:47:57.442 1997.288 - 2012.891: 95.8956% ( 181) 00:47:57.442 2012.891 - 2028.495: 96.0594% ( 150) 00:47:57.442 2028.495 - 2044.099: 96.2068% ( 135) 00:47:57.442 2044.099 - 2059.703: 96.3345% ( 117) 00:47:57.442 2059.703 - 2075.307: 96.4492% ( 105) 00:47:57.442 2075.307 - 2090.910: 96.5616% ( 103) 00:47:57.442 2090.910 - 2106.514: 96.6577% ( 88) 00:47:57.442 2106.514 - 2122.118: 96.7494% ( 84) 00:47:57.442 2122.118 - 2137.722: 96.8401% ( 83) 00:47:57.442 2137.722 - 2153.326: 96.9252% ( 78) 00:47:57.442 2153.326 - 2168.930: 97.0049% ( 73) 
00:47:57.442 2168.930 - 2184.533: 97.0934% ( 81) 00:47:57.442 2184.533 - 2200.137: 97.1720% ( 72) 00:47:57.442 2200.137 - 2215.741: 97.2473% ( 69) 00:47:57.442 2215.741 - 2231.345: 97.3183% ( 65) 00:47:57.442 2231.345 - 2246.949: 97.3915% ( 67) 00:47:57.442 2246.949 - 2262.552: 97.4613% ( 64) 00:47:57.442 2262.552 - 2278.156: 97.5334% ( 66) 00:47:57.442 2278.156 - 2293.760: 97.6011% ( 62) 00:47:57.442 2293.760 - 2309.364: 97.6732% ( 66) 00:47:57.442 2309.364 - 2324.968: 97.7376% ( 59) 00:47:57.442 2324.968 - 2340.571: 97.8075% ( 64) 00:47:57.442 2340.571 - 2356.175: 97.8697% ( 57) 00:47:57.442 2356.175 - 2371.779: 97.9265% ( 52) 00:47:57.442 2371.779 - 2387.383: 97.9909% ( 59) 00:47:57.442 2387.383 - 2402.987: 98.0499% ( 54) 00:47:57.442 2402.987 - 2418.590: 98.1099% ( 55) 00:47:57.442 2418.590 - 2434.194: 98.1645% ( 50) 00:47:57.442 2434.194 - 2449.798: 98.2213% ( 52) 00:47:57.443 2449.798 - 2465.402: 98.2737% ( 48) 00:47:57.443 2465.402 - 2481.006: 98.3250% ( 47) 00:47:57.443 2481.006 - 2496.610: 98.3774% ( 48) 00:47:57.443 2496.610 - 2512.213: 98.4299% ( 48) 00:47:57.443 2512.213 - 2527.817: 98.4845% ( 50) 00:47:57.443 2527.817 - 2543.421: 98.5347% ( 46) 00:47:57.443 2543.421 - 2559.025: 98.5849% ( 46) 00:47:57.443 2559.025 - 2574.629: 98.6319% ( 43) 00:47:57.443 2574.629 - 2590.232: 98.6755% ( 40) 00:47:57.443 2590.232 - 2605.836: 98.7203% ( 41) 00:47:57.443 2605.836 - 2621.440: 98.7607% ( 37) 00:47:57.443 2621.440 - 2637.044: 98.8109% ( 46) 00:47:57.443 2637.044 - 2652.648: 98.8524% ( 38) 00:47:57.443 2652.648 - 2668.251: 98.8895% ( 34) 00:47:57.443 2668.251 - 2683.855: 98.9343% ( 41) 00:47:57.443 2683.855 - 2699.459: 98.9780% ( 40) 00:47:57.443 2699.459 - 2715.063: 99.0162% ( 35) 00:47:57.443 2715.063 - 2730.667: 99.0457% ( 27) 00:47:57.443 2730.667 - 2746.270: 99.0828% ( 34) 00:47:57.443 2746.270 - 2761.874: 99.1167% ( 31) 00:47:57.443 2761.874 - 2777.478: 99.1516% ( 32) 00:47:57.443 2777.478 - 2793.082: 99.1800% ( 26) 00:47:57.443 2793.082 - 2808.686: 99.2106% ( 28) 00:47:57.443 2808.686 - 2824.290: 99.2379% ( 25) 00:47:57.443 2824.290 - 2839.893: 99.2673% ( 27) 00:47:57.443 2839.893 - 2855.497: 99.2914% ( 22) 00:47:57.443 2855.497 - 2871.101: 99.3143% ( 21) 00:47:57.443 2871.101 - 2886.705: 99.3372% ( 21) 00:47:57.443 2886.705 - 2902.309: 99.3580% ( 19) 00:47:57.443 2902.309 - 2917.912: 99.3754% ( 16) 00:47:57.443 2917.912 - 2933.516: 99.3885% ( 12) 00:47:57.443 2933.516 - 2949.120: 99.4049% ( 15) 00:47:57.443 2949.120 - 2964.724: 99.4202% ( 14) 00:47:57.443 2964.724 - 2980.328: 99.4333% ( 12) 00:47:57.443 2980.328 - 2995.931: 99.4431% ( 9) 00:47:57.443 2995.931 - 3011.535: 99.4551% ( 11) 00:47:57.443 3011.535 - 3027.139: 99.4672% ( 11) 00:47:57.443 3027.139 - 3042.743: 99.4770% ( 9) 00:47:57.443 3042.743 - 3058.347: 99.4868% ( 9) 00:47:57.443 3058.347 - 3073.950: 99.4977% ( 10) 00:47:57.443 3073.950 - 3089.554: 99.5076% ( 9) 00:47:57.443 3089.554 - 3105.158: 99.5185% ( 10) 00:47:57.443 3105.158 - 3120.762: 99.5305% ( 11) 00:47:57.443 3120.762 - 3136.366: 99.5370% ( 6) 00:47:57.443 3136.366 - 3151.970: 99.5480% ( 10) 00:47:57.443 3151.970 - 3167.573: 99.5556% ( 7) 00:47:57.443 3167.573 - 3183.177: 99.5632% ( 7) 00:47:57.443 3183.177 - 3198.781: 99.5720% ( 8) 00:47:57.443 3198.781 - 3214.385: 99.5796% ( 7) 00:47:57.443 3214.385 - 3229.989: 99.5873% ( 7) 00:47:57.443 3229.989 - 3245.592: 99.5949% ( 7) 00:47:57.443 3245.592 - 3261.196: 99.6015% ( 6) 00:47:57.443 3261.196 - 3276.800: 99.6091% ( 7) 00:47:57.443 3276.800 - 3292.404: 99.6157% ( 6) 00:47:57.443 3292.404 - 3308.008: 
99.6211% ( 5) 00:47:57.443 3308.008 - 3323.611: 99.6288% ( 7) 00:47:57.443 3323.611 - 3339.215: 99.6353% ( 6) 00:47:57.443 3339.215 - 3354.819: 99.6408% ( 5) 00:47:57.443 3354.819 - 3370.423: 99.6473% ( 6) 00:47:57.443 3370.423 - 3386.027: 99.6561% ( 8) 00:47:57.443 3386.027 - 3401.630: 99.6626% ( 6) 00:47:57.443 3401.630 - 3417.234: 99.6692% ( 6) 00:47:57.443 3417.234 - 3432.838: 99.6746% ( 5) 00:47:57.443 3432.838 - 3448.442: 99.6779% ( 3) 00:47:57.443 3448.442 - 3464.046: 99.6844% ( 6) 00:47:57.443 3464.046 - 3479.650: 99.6888% ( 4) 00:47:57.443 3479.650 - 3495.253: 99.6932% ( 4) 00:47:57.443 3495.253 - 3510.857: 99.6975% ( 4) 00:47:57.443 3510.857 - 3526.461: 99.7041% ( 6) 00:47:57.443 3526.461 - 3542.065: 99.7085% ( 4) 00:47:57.443 3542.065 - 3557.669: 99.7128% ( 4) 00:47:57.443 3557.669 - 3573.272: 99.7183% ( 5) 00:47:57.443 3573.272 - 3588.876: 99.7227% ( 4) 00:47:57.443 3588.876 - 3604.480: 99.7292% ( 6) 00:47:57.443 3604.480 - 3620.084: 99.7336% ( 4) 00:47:57.443 3620.084 - 3635.688: 99.7401% ( 6) 00:47:57.443 3635.688 - 3651.291: 99.7445% ( 4) 00:47:57.443 3651.291 - 3666.895: 99.7510% ( 6) 00:47:57.443 3666.895 - 3682.499: 99.7554% ( 4) 00:47:57.443 3682.499 - 3698.103: 99.7609% ( 5) 00:47:57.443 3698.103 - 3713.707: 99.7663% ( 5) 00:47:57.443 3713.707 - 3729.310: 99.7718% ( 5) 00:47:57.443 3729.310 - 3744.914: 99.7783% ( 6) 00:47:57.443 3744.914 - 3760.518: 99.7827% ( 4) 00:47:57.443 3760.518 - 3776.122: 99.7893% ( 6) 00:47:57.443 3776.122 - 3791.726: 99.7947% ( 5) 00:47:57.443 3791.726 - 3807.330: 99.8013% ( 6) 00:47:57.443 3807.330 - 3822.933: 99.8046% ( 3) 00:47:57.443 3822.933 - 3838.537: 99.8100% ( 5) 00:47:57.443 3838.537 - 3854.141: 99.8155% ( 5) 00:47:57.443 3854.141 - 3869.745: 99.8198% ( 4) 00:47:57.443 3869.745 - 3885.349: 99.8242% ( 4) 00:47:57.443 3885.349 - 3900.952: 99.8286% ( 4) 00:47:57.443 3900.952 - 3916.556: 99.8329% ( 4) 00:47:57.443 3916.556 - 3932.160: 99.8395% ( 6) 00:47:57.443 3932.160 - 3947.764: 99.8428% ( 3) 00:47:57.443 3947.764 - 3963.368: 99.8460% ( 3) 00:47:57.443 3963.368 - 3978.971: 99.8493% ( 3) 00:47:57.443 3978.971 - 3994.575: 99.8537% ( 4) 00:47:57.443 3994.575 - 4025.783: 99.8602% ( 6) 00:47:57.443 4025.783 - 4056.990: 99.8646% ( 4) 00:47:57.443 4056.990 - 4088.198: 99.8701% ( 5) 00:47:57.443 4088.198 - 4119.406: 99.8755% ( 5) 00:47:57.443 4119.406 - 4150.613: 99.8799% ( 4) 00:47:57.443 4150.613 - 4181.821: 99.8854% ( 5) 00:47:57.443 4181.821 - 4213.029: 99.8908% ( 5) 00:47:57.443 4213.029 - 4244.236: 99.8963% ( 5) 00:47:57.443 4244.236 - 4275.444: 99.9017% ( 5) 00:47:57.443 4275.444 - 4306.651: 99.9061% ( 4) 00:47:57.443 4306.651 - 4337.859: 99.9126% ( 6) 00:47:57.443 4337.859 - 4369.067: 99.9181% ( 5) 00:47:57.443 4369.067 - 4400.274: 99.9236% ( 5) 00:47:57.443 4400.274 - 4431.482: 99.9279% ( 4) 00:47:57.443 4431.482 - 4462.690: 99.9334% ( 5) 00:47:57.443 4462.690 - 4493.897: 99.9389% ( 5) 00:47:57.443 4493.897 - 4525.105: 99.9432% ( 4) 00:47:57.443 4525.105 - 4556.312: 99.9487% ( 5) 00:47:57.443 4556.312 - 4587.520: 99.9541% ( 5) 00:47:57.443 4587.520 - 4618.728: 99.9585% ( 4) 00:47:57.443 4618.728 - 4649.935: 99.9596% ( 1) 00:47:57.443 4649.935 - 4681.143: 99.9607% ( 1) 00:47:57.443 4681.143 - 4712.350: 99.9618% ( 1) 00:47:57.443 4712.350 - 4743.558: 99.9629% ( 1) 00:47:57.443 4743.558 - 4774.766: 99.9651% ( 2) 00:47:57.443 4774.766 - 4805.973: 99.9662% ( 1) 00:47:57.443 4805.973 - 4837.181: 99.9672% ( 1) 00:47:57.443 4837.181 - 4868.389: 99.9683% ( 1) 00:47:57.443 4868.389 - 4899.596: 99.9694% ( 1) 00:47:57.443 4899.596 - 4930.804: 
99.9705% ( 1) 00:47:57.443 4930.804 - 4962.011: 99.9716% ( 1) 00:47:57.443 4962.011 - 4993.219: 99.9727% ( 1) 00:47:57.443 4993.219 - 5024.427: 99.9738% ( 1) 00:47:57.443 5024.427 - 5055.634: 99.9749% ( 1) 00:47:57.443 5055.634 - 5086.842: 99.9760% ( 1) 00:47:57.443 5086.842 - 5118.050: 99.9771% ( 1) 00:47:57.443 5118.050 - 5149.257: 99.9793% ( 2) 00:47:57.443 5149.257 - 5180.465: 99.9803% ( 1) 00:47:57.443 5180.465 - 5211.672: 99.9814% ( 1) 00:47:57.443 5211.672 - 5242.880: 99.9825% ( 1) 00:47:57.443 5242.880 - 5274.088: 99.9836% ( 1) 00:47:57.443 5274.088 - 5305.295: 99.9858% ( 2) 00:47:57.443 5305.295 - 5336.503: 99.9869% ( 1) 00:47:57.443 5336.503 - 5367.710: 99.9880% ( 1) 00:47:57.443 5367.710 - 5398.918: 99.9891% ( 1) 00:47:57.443 5398.918 - 5430.126: 99.9902% ( 1) 00:47:57.443 5430.126 - 5461.333: 99.9913% ( 1) 00:47:57.443 5461.333 - 5492.541: 99.9924% ( 1) 00:47:57.443 5492.541 - 5523.749: 99.9934% ( 1) 00:47:57.443 5523.749 - 5554.956: 99.9945% ( 1) 00:47:57.443 5554.956 - 5586.164: 99.9967% ( 2) 00:47:57.443 5586.164 - 5617.371: 99.9978% ( 1) 00:47:57.443 5617.371 - 5648.579: 99.9989% ( 1) 00:47:57.443 5679.787 - 5710.994: 100.0000% ( 1) 00:47:57.443 00:47:57.443 12:11:29 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:47:58.832 Initializing NVMe Controllers 00:47:58.832 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:58.832 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:47:58.832 Initialization complete. Launching workers. 00:47:58.832 ======================================================== 00:47:58.832 Latency(us) 00:47:58.832 Device Information : IOPS MiB/s Average min max 00:47:58.832 PCIE (0000:00:10.0) NSID 1 from core 0: 58067.22 680.48 2203.30 845.44 14882.18 00:47:58.832 ======================================================== 00:47:58.832 Total : 58067.22 680.48 2203.30 845.44 14882.18 00:47:58.832 00:47:58.832 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:47:58.832 ================================================================================= 00:47:58.832 1.00000% : 1178.088us 00:47:58.832 10.00000% : 1419.947us 00:47:58.832 25.00000% : 1607.192us 00:47:58.832 50.00000% : 2012.891us 00:47:58.832 75.00000% : 2496.610us 00:47:58.832 90.00000% : 3261.196us 00:47:58.832 95.00000% : 3885.349us 00:47:58.832 98.00000% : 4649.935us 00:47:58.832 99.00000% : 4993.219us 00:47:58.832 99.50000% : 5149.257us 00:47:58.832 99.90000% : 13544.107us 00:47:58.832 99.99000% : 14729.996us 00:47:58.832 99.99900% : 14917.242us 00:47:58.832 99.99990% : 14917.242us 00:47:58.832 99.99999% : 14917.242us 00:47:58.832 00:47:58.832 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:47:58.832 ============================================================================== 00:47:58.832 Range in us Cumulative IO count 00:47:58.832 842.606 - 846.507: 0.0017% ( 1) 00:47:58.832 905.021 - 908.922: 0.0052% ( 2) 00:47:58.832 908.922 - 912.823: 0.0069% ( 1) 00:47:58.832 916.724 - 920.625: 0.0086% ( 1) 00:47:58.832 975.238 - 979.139: 0.0103% ( 1) 00:47:58.832 979.139 - 983.040: 0.0155% ( 3) 00:47:58.832 983.040 - 986.941: 0.0172% ( 1) 00:47:58.832 986.941 - 990.842: 0.0224% ( 3) 00:47:58.832 994.743 - 998.644: 0.0276% ( 3) 00:47:58.832 998.644 - 1006.446: 0.0344% ( 4) 00:47:58.832 1006.446 - 1014.248: 0.0396% ( 3) 00:47:58.832 1014.248 - 1022.050: 0.0499% ( 6) 00:47:58.832 1022.050 - 1029.851: 0.0620% ( 7) 00:47:58.832 1029.851 - 1037.653: 0.0672% ( 3) 00:47:58.832 
1037.653 - 1045.455: 0.0827% ( 9) 00:47:58.832 1045.455 - 1053.257: 0.0999% ( 10) 00:47:58.832 1053.257 - 1061.059: 0.1154% ( 9) 00:47:58.832 1061.059 - 1068.861: 0.1395% ( 14) 00:47:58.832 1068.861 - 1076.663: 0.1705% ( 18) 00:47:58.832 1076.663 - 1084.465: 0.2066% ( 21) 00:47:58.832 1084.465 - 1092.267: 0.2462% ( 23) 00:47:58.832 1092.267 - 1100.069: 0.2789% ( 19) 00:47:58.832 1100.069 - 1107.870: 0.3220% ( 25) 00:47:58.832 1107.870 - 1115.672: 0.3771% ( 32) 00:47:58.832 1115.672 - 1123.474: 0.4425% ( 38) 00:47:58.832 1123.474 - 1131.276: 0.4804% ( 22) 00:47:58.832 1131.276 - 1139.078: 0.5751% ( 55) 00:47:58.832 1139.078 - 1146.880: 0.6406% ( 38) 00:47:58.832 1146.880 - 1154.682: 0.7180% ( 45) 00:47:58.832 1154.682 - 1162.484: 0.8145% ( 56) 00:47:58.832 1162.484 - 1170.286: 0.9092% ( 55) 00:47:58.832 1170.286 - 1178.088: 1.0349% ( 73) 00:47:58.832 1178.088 - 1185.890: 1.1520% ( 68) 00:47:58.832 1185.890 - 1193.691: 1.2863% ( 78) 00:47:58.832 1193.691 - 1201.493: 1.4206% ( 78) 00:47:58.832 1201.493 - 1209.295: 1.5583% ( 80) 00:47:58.832 1209.295 - 1217.097: 1.6944% ( 79) 00:47:58.832 1217.097 - 1224.899: 1.8493% ( 90) 00:47:58.832 1224.899 - 1232.701: 2.0370% ( 109) 00:47:58.832 1232.701 - 1240.503: 2.2178% ( 105) 00:47:58.832 1240.503 - 1248.305: 2.4107% ( 112) 00:47:58.832 1248.305 - 1256.107: 2.5880% ( 103) 00:47:58.832 1256.107 - 1263.909: 2.8274% ( 139) 00:47:58.832 1263.909 - 1271.710: 3.0547% ( 132) 00:47:58.832 1271.710 - 1279.512: 3.3061% ( 146) 00:47:58.832 1279.512 - 1287.314: 3.5747% ( 156) 00:47:58.832 1287.314 - 1295.116: 3.8571% ( 164) 00:47:58.832 1295.116 - 1302.918: 4.1136% ( 149) 00:47:58.832 1302.918 - 1310.720: 4.4270% ( 182) 00:47:58.832 1310.720 - 1318.522: 4.7439% ( 184) 00:47:58.832 1318.522 - 1326.324: 5.0831% ( 197) 00:47:58.832 1326.324 - 1334.126: 5.4929% ( 238) 00:47:58.832 1334.126 - 1341.928: 5.9062% ( 240) 00:47:58.832 1341.928 - 1349.730: 6.3125% ( 236) 00:47:58.832 1349.730 - 1357.531: 6.7258% ( 240) 00:47:58.832 1357.531 - 1365.333: 7.1098% ( 223) 00:47:58.832 1365.333 - 1373.135: 7.5506% ( 256) 00:47:58.832 1373.135 - 1380.937: 7.9845% ( 252) 00:47:58.832 1380.937 - 1388.739: 8.4598% ( 276) 00:47:58.832 1388.739 - 1396.541: 8.9126% ( 263) 00:47:58.832 1396.541 - 1404.343: 9.4223% ( 296) 00:47:58.832 1404.343 - 1412.145: 9.9251% ( 292) 00:47:58.832 1412.145 - 1419.947: 10.4692% ( 316) 00:47:58.832 1419.947 - 1427.749: 11.0065% ( 312) 00:47:58.832 1427.749 - 1435.550: 11.5506% ( 316) 00:47:58.832 1435.550 - 1443.352: 12.1016% ( 320) 00:47:58.832 1443.352 - 1451.154: 12.7025% ( 349) 00:47:58.832 1451.154 - 1458.956: 13.3173% ( 357) 00:47:58.832 1458.956 - 1466.758: 13.8855% ( 330) 00:47:58.832 1466.758 - 1474.560: 14.5467% ( 384) 00:47:58.832 1474.560 - 1482.362: 15.1804% ( 368) 00:47:58.832 1482.362 - 1490.164: 15.7951% ( 357) 00:47:58.832 1490.164 - 1497.966: 16.4115% ( 358) 00:47:58.832 1497.966 - 1505.768: 17.0607% ( 377) 00:47:58.832 1505.768 - 1513.570: 17.6754% ( 357) 00:47:58.832 1513.570 - 1521.371: 18.2678% ( 344) 00:47:58.832 1521.371 - 1529.173: 18.8842% ( 358) 00:47:58.832 1529.173 - 1536.975: 19.4731% ( 342) 00:47:58.832 1536.975 - 1544.777: 20.1102% ( 370) 00:47:58.832 1544.777 - 1552.579: 20.7249% ( 357) 00:47:58.832 1552.579 - 1560.381: 21.3638% ( 371) 00:47:58.832 1560.381 - 1568.183: 21.9940% ( 366) 00:47:58.832 1568.183 - 1575.985: 22.6621% ( 388) 00:47:58.832 1575.985 - 1583.787: 23.2613% ( 348) 00:47:58.832 1583.787 - 1591.589: 23.8864% ( 363) 00:47:58.832 1591.589 - 1599.390: 24.5131% ( 364) 00:47:58.832 1599.390 - 1607.192: 
25.0693% ( 323) 00:47:58.832 1607.192 - 1614.994: 25.6754% ( 352) 00:47:58.832 1614.994 - 1622.796: 26.2350% ( 325) 00:47:58.832 1622.796 - 1630.598: 26.7861% ( 320) 00:47:58.832 1630.598 - 1638.400: 27.3267% ( 314) 00:47:58.832 1638.400 - 1646.202: 27.8743% ( 318) 00:47:58.832 1646.202 - 1654.004: 28.4339% ( 325) 00:47:58.832 1654.004 - 1661.806: 29.0331% ( 348) 00:47:58.832 1661.806 - 1669.608: 29.5170% ( 281) 00:47:58.832 1669.608 - 1677.410: 30.0783% ( 326) 00:47:58.832 1677.410 - 1685.211: 30.5674% ( 284) 00:47:58.833 1685.211 - 1693.013: 31.0977% ( 308) 00:47:58.833 1693.013 - 1700.815: 31.5833% ( 282) 00:47:58.833 1700.815 - 1708.617: 32.0895% ( 294) 00:47:58.833 1708.617 - 1716.419: 32.5682% ( 278) 00:47:58.833 1716.419 - 1724.221: 33.0693% ( 291) 00:47:58.833 1724.221 - 1732.023: 33.5222% ( 263) 00:47:58.833 1732.023 - 1739.825: 34.0164% ( 287) 00:47:58.833 1739.825 - 1747.627: 34.4709% ( 264) 00:47:58.833 1747.627 - 1755.429: 34.9427% ( 274) 00:47:58.833 1755.429 - 1763.230: 35.4249% ( 280) 00:47:58.833 1763.230 - 1771.032: 35.8795% ( 264) 00:47:58.833 1771.032 - 1778.834: 36.3495% ( 273) 00:47:58.833 1778.834 - 1786.636: 36.8300% ( 279) 00:47:58.833 1786.636 - 1794.438: 37.2673% ( 254) 00:47:58.833 1794.438 - 1802.240: 37.7340% ( 271) 00:47:58.833 1802.240 - 1810.042: 38.1937% ( 267) 00:47:58.833 1810.042 - 1817.844: 38.6586% ( 270) 00:47:58.833 1817.844 - 1825.646: 39.0564% ( 231) 00:47:58.833 1825.646 - 1833.448: 39.5489% ( 286) 00:47:58.833 1833.448 - 1841.250: 39.9673% ( 243) 00:47:58.833 1841.250 - 1849.051: 40.4339% ( 271) 00:47:58.833 1849.051 - 1856.853: 40.8885% ( 264) 00:47:58.833 1856.853 - 1864.655: 41.3224% ( 252) 00:47:58.833 1864.655 - 1872.457: 41.8011% ( 278) 00:47:58.833 1872.457 - 1880.259: 42.2695% ( 272) 00:47:58.833 1880.259 - 1888.061: 42.7327% ( 269) 00:47:58.833 1888.061 - 1895.863: 43.2010% ( 272) 00:47:58.833 1895.863 - 1903.665: 43.6625% ( 268) 00:47:58.833 1903.665 - 1911.467: 44.1722% ( 296) 00:47:58.833 1911.467 - 1919.269: 44.6078% ( 253) 00:47:58.833 1919.269 - 1927.070: 45.0745% ( 271) 00:47:58.833 1927.070 - 1934.872: 45.5222% ( 260) 00:47:58.833 1934.872 - 1942.674: 46.0319% ( 296) 00:47:58.833 1942.674 - 1950.476: 46.4417% ( 238) 00:47:58.833 1950.476 - 1958.278: 46.9445% ( 292) 00:47:58.833 1958.278 - 1966.080: 47.3818% ( 254) 00:47:58.833 1966.080 - 1973.882: 47.8399% ( 266) 00:47:58.833 1973.882 - 1981.684: 48.2944% ( 264) 00:47:58.833 1981.684 - 1989.486: 48.7576% ( 269) 00:47:58.833 1989.486 - 1997.288: 49.1881% ( 250) 00:47:58.833 1997.288 - 2012.891: 50.1197% ( 541) 00:47:58.833 2012.891 - 2028.495: 51.0237% ( 525) 00:47:58.833 2028.495 - 2044.099: 51.9191% ( 520) 00:47:58.833 2044.099 - 2059.703: 52.8248% ( 526) 00:47:58.833 2059.703 - 2075.307: 53.7357% ( 529) 00:47:58.833 2075.307 - 2090.910: 54.6362% ( 523) 00:47:58.833 2090.910 - 2106.514: 55.5609% ( 537) 00:47:58.833 2106.514 - 2122.118: 56.4735% ( 530) 00:47:58.833 2122.118 - 2137.722: 57.3775% ( 525) 00:47:58.833 2137.722 - 2153.326: 58.2557% ( 510) 00:47:58.833 2153.326 - 2168.930: 59.1373% ( 512) 00:47:58.833 2168.930 - 2184.533: 60.0517% ( 531) 00:47:58.833 2184.533 - 2200.137: 60.9333% ( 512) 00:47:58.833 2200.137 - 2215.741: 61.7822% ( 493) 00:47:58.833 2215.741 - 2231.345: 62.6552% ( 507) 00:47:58.833 2231.345 - 2246.949: 63.4869% ( 483) 00:47:58.833 2246.949 - 2262.552: 64.3134% ( 480) 00:47:58.833 2262.552 - 2278.156: 65.1485% ( 485) 00:47:58.833 2278.156 - 2293.760: 65.9957% ( 492) 00:47:58.833 2293.760 - 2309.364: 66.8446% ( 493) 00:47:58.833 2309.364 - 2324.968: 
67.6212% ( 451) 00:47:58.833 2324.968 - 2340.571: 68.4150% ( 461) 00:47:58.833 2340.571 - 2356.175: 69.1916% ( 451) 00:47:58.833 2356.175 - 2371.779: 69.9733% ( 454) 00:47:58.833 2371.779 - 2387.383: 70.7068% ( 426) 00:47:58.833 2387.383 - 2402.987: 71.4369% ( 424) 00:47:58.833 2402.987 - 2418.590: 72.1498% ( 414) 00:47:58.833 2418.590 - 2434.194: 72.8747% ( 421) 00:47:58.833 2434.194 - 2449.798: 73.5979% ( 420) 00:47:58.833 2449.798 - 2465.402: 74.2471% ( 377) 00:47:58.833 2465.402 - 2481.006: 74.9152% ( 388) 00:47:58.833 2481.006 - 2496.610: 75.5799% ( 386) 00:47:58.833 2496.610 - 2512.213: 76.2411% ( 384) 00:47:58.833 2512.213 - 2527.817: 76.8799% ( 371) 00:47:58.833 2527.817 - 2543.421: 77.5084% ( 365) 00:47:58.833 2543.421 - 2559.025: 78.1300% ( 361) 00:47:58.833 2559.025 - 2574.629: 78.7533% ( 362) 00:47:58.833 2574.629 - 2590.232: 79.3681% ( 357) 00:47:58.833 2590.232 - 2605.836: 79.9277% ( 325) 00:47:58.833 2605.836 - 2621.440: 80.4873% ( 325) 00:47:58.833 2621.440 - 2637.044: 80.9815% ( 287) 00:47:58.833 2637.044 - 2652.648: 81.4361% ( 264) 00:47:58.833 2652.648 - 2668.251: 81.8700% ( 252) 00:47:58.833 2668.251 - 2683.855: 82.2488% ( 220) 00:47:58.833 2683.855 - 2699.459: 82.6053% ( 207) 00:47:58.833 2699.459 - 2715.063: 82.9066% ( 175) 00:47:58.833 2715.063 - 2730.667: 83.1838% ( 161) 00:47:58.833 2730.667 - 2746.270: 83.4800% ( 172) 00:47:58.833 2746.270 - 2761.874: 83.7452% ( 154) 00:47:58.833 2761.874 - 2777.478: 83.9759% ( 134) 00:47:58.833 2777.478 - 2793.082: 84.2032% ( 132) 00:47:58.833 2793.082 - 2808.686: 84.4374% ( 136) 00:47:58.833 2808.686 - 2824.290: 84.6578% ( 128) 00:47:58.833 2824.290 - 2839.893: 84.8765% ( 127) 00:47:58.833 2839.893 - 2855.497: 85.1192% ( 141) 00:47:58.833 2855.497 - 2871.101: 85.3121% ( 112) 00:47:58.833 2871.101 - 2886.705: 85.5032% ( 111) 00:47:58.833 2886.705 - 2902.309: 85.7167% ( 124) 00:47:58.833 2902.309 - 2917.912: 85.9199% ( 118) 00:47:58.833 2917.912 - 2933.516: 86.1438% ( 130) 00:47:58.833 2933.516 - 2949.120: 86.3349% ( 111) 00:47:58.833 2949.120 - 2964.724: 86.5467% ( 123) 00:47:58.833 2964.724 - 2980.328: 86.7585% ( 123) 00:47:58.833 2980.328 - 2995.931: 86.9617% ( 118) 00:47:58.833 2995.931 - 3011.535: 87.1873% ( 131) 00:47:58.833 3011.535 - 3027.139: 87.3612% ( 101) 00:47:58.833 3027.139 - 3042.743: 87.5368% ( 102) 00:47:58.833 3042.743 - 3058.347: 87.7400% ( 118) 00:47:58.833 3058.347 - 3073.950: 87.9225% ( 106) 00:47:58.833 3073.950 - 3089.554: 88.1016% ( 104) 00:47:58.833 3089.554 - 3105.158: 88.2893% ( 109) 00:47:58.833 3105.158 - 3120.762: 88.4821% ( 112) 00:47:58.833 3120.762 - 3136.366: 88.6733% ( 111) 00:47:58.833 3136.366 - 3151.970: 88.8437% ( 99) 00:47:58.833 3151.970 - 3167.573: 89.0349% ( 111) 00:47:58.833 3167.573 - 3183.177: 89.2191% ( 107) 00:47:58.833 3183.177 - 3198.781: 89.3999% ( 105) 00:47:58.833 3198.781 - 3214.385: 89.5618% ( 94) 00:47:58.833 3214.385 - 3229.989: 89.7271% ( 96) 00:47:58.833 3229.989 - 3245.592: 89.9044% ( 103) 00:47:58.833 3245.592 - 3261.196: 90.0646% ( 93) 00:47:58.833 3261.196 - 3276.800: 90.2350% ( 99) 00:47:58.833 3276.800 - 3292.404: 90.3745% ( 81) 00:47:58.833 3292.404 - 3308.008: 90.5433% ( 98) 00:47:58.833 3308.008 - 3323.611: 90.7103% ( 97) 00:47:58.833 3323.611 - 3339.215: 90.8498% ( 81) 00:47:58.833 3339.215 - 3354.819: 90.9978% ( 86) 00:47:58.833 3354.819 - 3370.423: 91.1408% ( 83) 00:47:58.833 3370.423 - 3386.027: 91.3181% ( 103) 00:47:58.833 3386.027 - 3401.630: 91.4628% ( 84) 00:47:58.833 3401.630 - 3417.234: 91.6160% ( 89) 00:47:58.833 3417.234 - 3432.838: 91.7538% ( 80) 
00:47:58.833 3432.838 - 3448.442: 91.9225% ( 98) 00:47:58.833 3448.442 - 3464.046: 92.1136% ( 111) 00:47:58.833 3464.046 - 3479.650: 92.2566% ( 83) 00:47:58.833 3479.650 - 3495.253: 92.4184% ( 94) 00:47:58.833 3495.253 - 3510.857: 92.5596% ( 82) 00:47:58.833 3510.857 - 3526.461: 92.7008% ( 82) 00:47:58.833 3526.461 - 3542.065: 92.8386% ( 80) 00:47:58.833 3542.065 - 3557.669: 92.9488% ( 64) 00:47:58.833 3557.669 - 3573.272: 93.0710% ( 71) 00:47:58.833 3573.272 - 3588.876: 93.1933% ( 71) 00:47:58.833 3588.876 - 3604.480: 93.3190% ( 73) 00:47:58.833 3604.480 - 3620.084: 93.4292% ( 64) 00:47:58.833 3620.084 - 3635.688: 93.5308% ( 59) 00:47:58.833 3635.688 - 3651.291: 93.6444% ( 66) 00:47:58.833 3651.291 - 3666.895: 93.7477% ( 60) 00:47:58.833 3666.895 - 3682.499: 93.8459% ( 57) 00:47:58.833 3682.499 - 3698.103: 93.9526% ( 62) 00:47:58.833 3698.103 - 3713.707: 94.0646% ( 65) 00:47:58.833 3713.707 - 3729.310: 94.1627% ( 57) 00:47:58.833 3729.310 - 3744.914: 94.2488% ( 50) 00:47:58.833 3744.914 - 3760.518: 94.3573% ( 63) 00:47:58.833 3760.518 - 3776.122: 94.4520% ( 55) 00:47:58.833 3776.122 - 3791.726: 94.5536% ( 59) 00:47:58.833 3791.726 - 3807.330: 94.6380% ( 49) 00:47:58.833 3807.330 - 3822.933: 94.7189% ( 47) 00:47:58.833 3822.933 - 3838.537: 94.8188% ( 58) 00:47:58.833 3838.537 - 3854.141: 94.9066% ( 51) 00:47:58.833 3854.141 - 3869.745: 94.9755% ( 40) 00:47:58.833 3869.745 - 3885.349: 95.0529% ( 45) 00:47:58.833 3885.349 - 3900.952: 95.1373% ( 49) 00:47:58.833 3900.952 - 3916.556: 95.2251% ( 51) 00:47:58.833 3916.556 - 3932.160: 95.3043% ( 46) 00:47:58.833 3932.160 - 3947.764: 95.3715% ( 39) 00:47:58.833 3947.764 - 3963.368: 95.4524% ( 47) 00:47:58.833 3963.368 - 3978.971: 95.5316% ( 46) 00:47:58.833 3978.971 - 3994.575: 95.6074% ( 44) 00:47:58.833 3994.575 - 4025.783: 95.7365% ( 75) 00:47:58.833 4025.783 - 4056.990: 95.8984% ( 94) 00:47:58.833 4056.990 - 4088.198: 96.0276% ( 75) 00:47:58.833 4088.198 - 4119.406: 96.1464% ( 69) 00:47:58.833 4119.406 - 4150.613: 96.2738% ( 74) 00:47:58.833 4150.613 - 4181.821: 96.4029% ( 75) 00:47:58.833 4181.821 - 4213.029: 96.5149% ( 65) 00:47:58.833 4213.029 - 4244.236: 96.6388% ( 72) 00:47:58.833 4244.236 - 4275.444: 96.7490% ( 64) 00:47:58.833 4275.444 - 4306.651: 96.8713% ( 71) 00:47:58.833 4306.651 - 4337.859: 96.9677% ( 56) 00:47:58.833 4337.859 - 4369.067: 97.0814% ( 66) 00:47:58.833 4369.067 - 4400.274: 97.1967% ( 67) 00:47:58.833 4400.274 - 4431.482: 97.2966% ( 58) 00:47:58.833 4431.482 - 4462.690: 97.4120% ( 67) 00:47:58.833 4462.690 - 4493.897: 97.5136% ( 59) 00:47:58.833 4493.897 - 4525.105: 97.6083% ( 55) 00:47:58.833 4525.105 - 4556.312: 97.7185% ( 64) 00:47:58.833 4556.312 - 4587.520: 97.8166% ( 57) 00:47:58.833 4587.520 - 4618.728: 97.9389% ( 71) 00:47:58.833 4618.728 - 4649.935: 98.0146% ( 44) 00:47:58.834 4649.935 - 4681.143: 98.1197% ( 61) 00:47:58.834 4681.143 - 4712.350: 98.2316% ( 65) 00:47:58.834 4712.350 - 4743.558: 98.3108% ( 46) 00:47:58.834 4743.558 - 4774.766: 98.4072% ( 56) 00:47:58.834 4774.766 - 4805.973: 98.5088% ( 59) 00:47:58.834 4805.973 - 4837.181: 98.5984% ( 52) 00:47:58.834 4837.181 - 4868.389: 98.7068% ( 63) 00:47:58.834 4868.389 - 4899.596: 98.7998% ( 54) 00:47:58.834 4899.596 - 4930.804: 98.9083% ( 63) 00:47:58.834 4930.804 - 4962.011: 98.9892% ( 47) 00:47:58.834 4962.011 - 4993.219: 99.0839% ( 55) 00:47:58.834 4993.219 - 5024.427: 99.1769% ( 54) 00:47:58.834 5024.427 - 5055.634: 99.2699% ( 54) 00:47:58.834 5055.634 - 5086.842: 99.3543% ( 49) 00:47:58.834 5086.842 - 5118.050: 99.4335% ( 46) 00:47:58.834 5118.050 - 
5149.257: 99.5110% ( 45) 00:47:58.834 5149.257 - 5180.465: 99.5712% ( 35) 00:47:58.834 5180.465 - 5211.672: 99.5988% ( 16) 00:47:58.834 5211.672 - 5242.880: 99.6143% ( 9) 00:47:58.834 5242.880 - 5274.088: 99.6177% ( 2) 00:47:58.834 5274.088 - 5305.295: 99.6212% ( 2) 00:47:58.834 5305.295 - 5336.503: 99.6229% ( 1) 00:47:58.834 5336.503 - 5367.710: 99.6246% ( 1) 00:47:58.834 5367.710 - 5398.918: 99.6281% ( 2) 00:47:58.834 5398.918 - 5430.126: 99.6315% ( 2) 00:47:58.834 5430.126 - 5461.333: 99.6350% ( 2) 00:47:58.834 5461.333 - 5492.541: 99.6384% ( 2) 00:47:58.834 5492.541 - 5523.749: 99.6418% ( 2) 00:47:58.834 5523.749 - 5554.956: 99.6453% ( 2) 00:47:58.834 5554.956 - 5586.164: 99.6487% ( 2) 00:47:58.834 5586.164 - 5617.371: 99.6522% ( 2) 00:47:58.834 5617.371 - 5648.579: 99.6539% ( 1) 00:47:58.834 5648.579 - 5679.787: 99.6573% ( 2) 00:47:58.834 5679.787 - 5710.994: 99.6608% ( 2) 00:47:58.834 5710.994 - 5742.202: 99.6625% ( 1) 00:47:58.834 5742.202 - 5773.410: 99.6659% ( 2) 00:47:58.834 5773.410 - 5804.617: 99.6677% ( 1) 00:47:58.834 5804.617 - 5835.825: 99.6711% ( 2) 00:47:58.834 5835.825 - 5867.032: 99.6728% ( 1) 00:47:58.834 5867.032 - 5898.240: 99.6763% ( 2) 00:47:58.834 5898.240 - 5929.448: 99.6780% ( 1) 00:47:58.834 5929.448 - 5960.655: 99.6797% ( 1) 00:47:58.834 5960.655 - 5991.863: 99.6814% ( 1) 00:47:58.834 5991.863 - 6023.070: 99.6832% ( 1) 00:47:58.834 6023.070 - 6054.278: 99.6849% ( 1) 00:47:58.834 6054.278 - 6085.486: 99.6866% ( 1) 00:47:58.834 6085.486 - 6116.693: 99.6883% ( 1) 00:47:58.834 6116.693 - 6147.901: 99.6901% ( 1) 00:47:58.834 6179.109 - 6210.316: 99.6918% ( 1) 00:47:58.834 6210.316 - 6241.524: 99.6935% ( 1) 00:47:58.834 6241.524 - 6272.731: 99.6952% ( 1) 00:47:58.834 6272.731 - 6303.939: 99.6969% ( 1) 00:47:58.834 6303.939 - 6335.147: 99.6987% ( 1) 00:47:58.834 6335.147 - 6366.354: 99.7004% ( 1) 00:47:58.834 6366.354 - 6397.562: 99.7021% ( 1) 00:47:58.834 6397.562 - 6428.770: 99.7038% ( 1) 00:47:58.834 6428.770 - 6459.977: 99.7056% ( 1) 00:47:58.834 6459.977 - 6491.185: 99.7073% ( 1) 00:47:58.834 6491.185 - 6522.392: 99.7090% ( 1) 00:47:58.834 6522.392 - 6553.600: 99.7107% ( 1) 00:47:58.834 6553.600 - 6584.808: 99.7124% ( 1) 00:47:58.834 6584.808 - 6616.015: 99.7142% ( 1) 00:47:58.834 6647.223 - 6678.430: 99.7159% ( 1) 00:47:58.834 6678.430 - 6709.638: 99.7176% ( 1) 00:47:58.834 6740.846 - 6772.053: 99.7193% ( 1) 00:47:58.834 6772.053 - 6803.261: 99.7211% ( 1) 00:47:58.834 6803.261 - 6834.469: 99.7228% ( 1) 00:47:58.834 6834.469 - 6865.676: 99.7245% ( 1) 00:47:58.834 6865.676 - 6896.884: 99.7262% ( 1) 00:47:58.834 6896.884 - 6928.091: 99.7279% ( 1) 00:47:58.834 6928.091 - 6959.299: 99.7297% ( 1) 00:47:58.834 6959.299 - 6990.507: 99.7314% ( 1) 00:47:58.834 6990.507 - 7021.714: 99.7331% ( 1) 00:47:58.834 7021.714 - 7052.922: 99.7348% ( 1) 00:47:58.834 7052.922 - 7084.130: 99.7365% ( 1) 00:47:58.834 7084.130 - 7115.337: 99.7383% ( 1) 00:47:58.834 7115.337 - 7146.545: 99.7400% ( 1) 00:47:58.834 7177.752 - 7208.960: 99.7417% ( 1) 00:47:58.834 7208.960 - 7240.168: 99.7434% ( 1) 00:47:58.834 7240.168 - 7271.375: 99.7452% ( 1) 00:47:58.834 7271.375 - 7302.583: 99.7469% ( 1) 00:47:58.834 7302.583 - 7333.790: 99.7486% ( 1) 00:47:58.834 7364.998 - 7396.206: 99.7503% ( 1) 00:47:58.834 7396.206 - 7427.413: 99.7520% ( 1) 00:47:58.834 7427.413 - 7458.621: 99.7538% ( 1) 00:47:58.834 7458.621 - 7489.829: 99.7555% ( 1) 00:47:58.834 7489.829 - 7521.036: 99.7572% ( 1) 00:47:58.834 7521.036 - 7552.244: 99.7589% ( 1) 00:47:58.834 7552.244 - 7583.451: 99.7607% ( 1) 00:47:58.834 7583.451 - 
7614.659: 99.7624% ( 1) 00:47:58.834 7614.659 - 7645.867: 99.7641% ( 1) 00:47:58.834 7645.867 - 7677.074: 99.7658% ( 1) 00:47:58.834 7677.074 - 7708.282: 99.7675% ( 1) 00:47:58.834 7739.490 - 7770.697: 99.7693% ( 1) 00:47:58.834 7770.697 - 7801.905: 99.7710% ( 1) 00:47:58.834 7801.905 - 7833.112: 99.7727% ( 1) 00:47:58.834 7864.320 - 7895.528: 99.7744% ( 1) 00:47:58.834 7895.528 - 7926.735: 99.7762% ( 1) 00:47:58.834 7926.735 - 7957.943: 99.7779% ( 1) 00:47:58.834 7989.150 - 8051.566: 99.7796% ( 1) 00:47:58.834 12420.632 - 12483.048: 99.7813% ( 1) 00:47:58.834 12483.048 - 12545.463: 99.7830% ( 1) 00:47:58.834 12545.463 - 12607.878: 99.7865% ( 2) 00:47:58.834 12607.878 - 12670.293: 99.7951% ( 5) 00:47:58.834 12670.293 - 12732.709: 99.8037% ( 5) 00:47:58.834 12732.709 - 12795.124: 99.8123% ( 5) 00:47:58.834 12795.124 - 12857.539: 99.8192% ( 4) 00:47:58.834 12857.539 - 12919.954: 99.8278% ( 5) 00:47:58.834 12919.954 - 12982.370: 99.8364% ( 5) 00:47:58.834 12982.370 - 13044.785: 99.8450% ( 5) 00:47:58.834 13044.785 - 13107.200: 99.8536% ( 5) 00:47:58.834 13107.200 - 13169.615: 99.8588% ( 3) 00:47:58.834 13169.615 - 13232.030: 99.8674% ( 5) 00:47:58.834 13232.030 - 13294.446: 99.8743% ( 4) 00:47:58.834 13294.446 - 13356.861: 99.8829% ( 5) 00:47:58.834 13356.861 - 13419.276: 99.8898% ( 4) 00:47:58.834 13419.276 - 13481.691: 99.8967% ( 4) 00:47:58.834 13481.691 - 13544.107: 99.9053% ( 5) 00:47:58.834 13544.107 - 13606.522: 99.9139% ( 5) 00:47:58.834 13606.522 - 13668.937: 99.9225% ( 5) 00:47:58.834 13668.937 - 13731.352: 99.9277% ( 3) 00:47:58.834 13731.352 - 13793.768: 99.9346% ( 4) 00:47:58.834 13793.768 - 13856.183: 99.9415% ( 4) 00:47:58.834 13856.183 - 13918.598: 99.9501% ( 5) 00:47:58.834 13918.598 - 13981.013: 99.9570% ( 4) 00:47:58.834 13981.013 - 14043.429: 99.9604% ( 2) 00:47:58.834 14043.429 - 14105.844: 99.9638% ( 2) 00:47:58.834 14105.844 - 14168.259: 99.9656% ( 1) 00:47:58.834 14168.259 - 14230.674: 99.9690% ( 2) 00:47:58.834 14230.674 - 14293.090: 99.9724% ( 2) 00:47:58.834 14293.090 - 14355.505: 99.9742% ( 1) 00:47:58.834 14355.505 - 14417.920: 99.9759% ( 1) 00:47:58.834 14417.920 - 14480.335: 99.9793% ( 2) 00:47:58.834 14480.335 - 14542.750: 99.9828% ( 2) 00:47:58.834 14542.750 - 14605.166: 99.9862% ( 2) 00:47:58.834 14605.166 - 14667.581: 99.9879% ( 1) 00:47:58.834 14667.581 - 14729.996: 99.9914% ( 2) 00:47:58.834 14729.996 - 14792.411: 99.9948% ( 2) 00:47:58.834 14792.411 - 14854.827: 99.9983% ( 2) 00:47:58.834 14854.827 - 14917.242: 100.0000% ( 1) 00:47:58.834 00:47:58.834 12:11:30 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:47:58.834 00:47:58.834 real 0m2.764s 00:47:58.834 user 0m2.288s 00:47:58.834 sys 0m0.322s 00:47:58.834 12:11:30 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:58.834 12:11:30 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:47:58.834 ************************************ 00:47:58.834 END TEST nvme_perf 00:47:58.834 ************************************ 00:47:59.092 12:11:30 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:47:59.092 12:11:30 nvme -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:47:59.092 12:11:30 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:59.092 12:11:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:59.092 ************************************ 00:47:59.092 START TEST nvme_hello_world 00:47:59.092 ************************************ 00:47:59.092 12:11:30 nvme.nvme_hello_world 
-- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:47:59.350 Initializing NVMe Controllers 00:47:59.350 Attached to 0000:00:10.0 00:47:59.350 Namespace ID: 1 size: 5GB 00:47:59.350 Initialization complete. 00:47:59.350 INFO: using host memory buffer for IO 00:47:59.350 Hello world! 00:47:59.350 00:47:59.350 real 0m0.400s 00:47:59.350 user 0m0.144s 00:47:59.350 sys 0m0.158s 00:47:59.350 12:11:31 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:59.350 ************************************ 00:47:59.351 END TEST nvme_hello_world 00:47:59.351 ************************************ 00:47:59.351 12:11:31 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:47:59.351 12:11:31 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:47:59.351 12:11:31 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:47:59.351 12:11:31 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:59.351 12:11:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:59.351 ************************************ 00:47:59.351 START TEST nvme_sgl 00:47:59.351 ************************************ 00:47:59.351 12:11:31 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:47:59.608 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:47:59.608 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:47:59.608 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:47:59.866 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:47:59.866 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:47:59.866 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:47:59.866 NVMe Readv/Writev Request test 00:47:59.866 Attached to 0000:00:10.0 00:47:59.866 0000:00:10.0: build_io_request_2 test passed 00:47:59.866 0000:00:10.0: build_io_request_4 test passed 00:47:59.866 0000:00:10.0: build_io_request_5 test passed 00:47:59.866 0000:00:10.0: build_io_request_6 test passed 00:47:59.866 0000:00:10.0: build_io_request_7 test passed 00:47:59.866 0000:00:10.0: build_io_request_10 test passed 00:47:59.866 Cleaning up... 00:47:59.866 00:47:59.866 real 0m0.390s 00:47:59.866 user 0m0.164s 00:47:59.866 sys 0m0.150s 00:47:59.866 12:11:31 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:59.866 12:11:31 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:47:59.866 ************************************ 00:47:59.866 END TEST nvme_sgl 00:47:59.866 ************************************ 00:47:59.866 12:11:31 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:47:59.866 12:11:31 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:47:59.866 12:11:31 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:59.866 12:11:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:59.866 ************************************ 00:47:59.866 START TEST nvme_e2edp 00:47:59.866 ************************************ 00:47:59.866 12:11:31 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:48:00.131 NVMe Write/Read with End-to-End data protection test 00:48:00.131 Attached to 0000:00:10.0 00:48:00.131 Cleaning up... 
00:48:00.131 00:48:00.131 real 0m0.375s 00:48:00.131 user 0m0.121s 00:48:00.131 sys 0m0.161s 00:48:00.131 12:11:32 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:00.131 12:11:32 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:48:00.131 ************************************ 00:48:00.131 END TEST nvme_e2edp 00:48:00.131 ************************************ 00:48:00.392 12:11:32 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:48:00.392 12:11:32 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:00.392 12:11:32 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:00.392 12:11:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:00.392 ************************************ 00:48:00.392 START TEST nvme_reserve 00:48:00.392 ************************************ 00:48:00.392 12:11:32 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:48:00.650 ===================================================== 00:48:00.651 NVMe Controller at PCI bus 0, device 16, function 0 00:48:00.651 ===================================================== 00:48:00.651 Reservations: Not Supported 00:48:00.651 Reservation test passed 00:48:00.651 00:48:00.651 real 0m0.309s 00:48:00.651 user 0m0.112s 00:48:00.651 sys 0m0.132s 00:48:00.651 12:11:32 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:00.651 12:11:32 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:48:00.651 ************************************ 00:48:00.651 END TEST nvme_reserve 00:48:00.651 ************************************ 00:48:00.651 12:11:32 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:48:00.651 12:11:32 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:00.651 12:11:32 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:00.651 12:11:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:00.651 ************************************ 00:48:00.651 START TEST nvme_err_injection 00:48:00.651 ************************************ 00:48:00.651 12:11:32 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:48:00.909 NVMe Error Injection test 00:48:00.909 Attached to 0000:00:10.0 00:48:00.909 0000:00:10.0: get features failed as expected 00:48:00.909 0000:00:10.0: get features successfully as expected 00:48:00.909 0000:00:10.0: read failed as expected 00:48:00.909 0000:00:10.0: read successfully as expected 00:48:00.909 Cleaning up... 
00:48:00.909 00:48:00.909 real 0m0.355s 00:48:00.909 user 0m0.119s 00:48:00.909 sys 0m0.166s 00:48:00.909 12:11:32 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:00.909 12:11:32 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:48:00.909 ************************************ 00:48:00.909 END TEST nvme_err_injection 00:48:00.909 ************************************ 00:48:01.167 12:11:33 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:48:01.167 12:11:33 nvme -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:48:01.167 12:11:33 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:01.167 12:11:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:01.167 ************************************ 00:48:01.167 START TEST nvme_overhead 00:48:01.167 ************************************ 00:48:01.167 12:11:33 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:48:02.541 Initializing NVMe Controllers 00:48:02.541 Attached to 0000:00:10.0 00:48:02.541 Initialization complete. Launching workers. 00:48:02.541 submit (in ns) avg, min, max = 15728.9, 12867.6, 158957.1 00:48:02.541 complete (in ns) avg, min, max = 10479.1, 8996.2, 219352.4 00:48:02.541 00:48:02.541 Submit histogram 00:48:02.541 ================ 00:48:02.541 Range in us Cumulative Count 00:48:02.541 12.861 - 12.922: 0.0090% ( 1) 00:48:02.541 13.166 - 13.227: 0.0269% ( 2) 00:48:02.541 13.227 - 13.288: 0.0358% ( 1) 00:48:02.541 13.288 - 13.349: 0.0448% ( 1) 00:48:02.541 13.836 - 13.897: 0.0985% ( 6) 00:48:02.541 13.897 - 13.958: 0.3493% ( 28) 00:48:02.542 13.958 - 14.019: 1.5495% ( 134) 00:48:02.542 14.019 - 14.080: 4.6574% ( 347) 00:48:02.542 14.080 - 14.141: 9.4492% ( 535) 00:48:02.542 14.141 - 14.202: 14.5813% ( 573) 00:48:02.542 14.202 - 14.263: 18.4057% ( 427) 00:48:02.542 14.263 - 14.324: 21.1285% ( 304) 00:48:02.542 14.324 - 14.385: 23.9767% ( 318) 00:48:02.542 14.385 - 14.446: 27.9355% ( 442) 00:48:02.542 14.446 - 14.507: 34.4648% ( 729) 00:48:02.542 14.507 - 14.568: 42.6870% ( 918) 00:48:02.542 14.568 - 14.629: 50.1926% ( 838) 00:48:02.542 14.629 - 14.690: 55.9427% ( 642) 00:48:02.542 14.690 - 14.750: 60.4837% ( 507) 00:48:02.542 14.750 - 14.811: 63.7080% ( 360) 00:48:02.542 14.811 - 14.872: 66.6458% ( 328) 00:48:02.542 14.872 - 14.933: 68.8222% ( 243) 00:48:02.542 14.933 - 14.994: 70.4613% ( 183) 00:48:02.542 14.994 - 15.055: 71.5629% ( 123) 00:48:02.542 15.055 - 15.116: 72.2615% ( 78) 00:48:02.542 15.116 - 15.177: 72.8079% ( 61) 00:48:02.542 15.177 - 15.238: 73.1393% ( 37) 00:48:02.542 15.238 - 15.299: 73.4169% ( 31) 00:48:02.542 15.299 - 15.360: 73.5961% ( 20) 00:48:02.542 15.360 - 15.421: 73.7662% ( 19) 00:48:02.542 15.421 - 15.482: 73.8916% ( 14) 00:48:02.542 15.482 - 15.543: 74.0170% ( 14) 00:48:02.542 15.543 - 15.604: 74.1155% ( 11) 00:48:02.542 15.604 - 15.726: 74.2409% ( 14) 00:48:02.542 15.726 - 15.848: 74.4469% ( 23) 00:48:02.542 15.848 - 15.970: 74.6440% ( 22) 00:48:02.542 15.970 - 16.091: 74.7246% ( 9) 00:48:02.542 16.091 - 16.213: 74.7873% ( 7) 00:48:02.542 16.213 - 16.335: 74.8321% ( 5) 00:48:02.542 16.457 - 16.579: 74.8410% ( 1) 00:48:02.542 16.579 - 16.701: 74.8500% ( 1) 00:48:02.542 16.701 - 16.823: 74.8768% ( 3) 00:48:02.542 16.823 - 16.945: 74.8858% ( 1) 00:48:02.542 16.945 - 17.067: 74.8948% ( 1) 00:48:02.542 17.067 - 17.189: 74.9127% ( 2) 00:48:02.542 17.189 - 17.310: 
74.9306% ( 2) 00:48:02.542 17.310 - 17.432: 74.9485% ( 2) 00:48:02.542 17.432 - 17.554: 74.9933% ( 5) 00:48:02.542 17.554 - 17.676: 75.0291% ( 4) 00:48:02.542 17.676 - 17.798: 75.1276% ( 11) 00:48:02.542 17.798 - 17.920: 76.3547% ( 137) 00:48:02.542 17.920 - 18.042: 81.1017% ( 530) 00:48:02.542 18.042 - 18.164: 84.8455% ( 418) 00:48:02.542 18.164 - 18.286: 87.1115% ( 253) 00:48:02.542 18.286 - 18.408: 88.7506% ( 183) 00:48:02.542 18.408 - 18.530: 89.9060% ( 129) 00:48:02.542 18.530 - 18.651: 91.4017% ( 167) 00:48:02.542 18.651 - 18.773: 92.5840% ( 132) 00:48:02.542 18.773 - 18.895: 93.2826% ( 78) 00:48:02.542 18.895 - 19.017: 93.7752% ( 55) 00:48:02.542 19.017 - 19.139: 94.0349% ( 29) 00:48:02.542 19.139 - 19.261: 94.3753% ( 38) 00:48:02.542 19.261 - 19.383: 94.6529% ( 31) 00:48:02.542 19.383 - 19.505: 94.9843% ( 37) 00:48:02.542 19.505 - 19.627: 95.5038% ( 58) 00:48:02.542 19.627 - 19.749: 95.8800% ( 42) 00:48:02.542 19.749 - 19.870: 96.1935% ( 35) 00:48:02.542 19.870 - 19.992: 96.3995% ( 23) 00:48:02.542 19.992 - 20.114: 96.5876% ( 21) 00:48:02.542 20.114 - 20.236: 96.7129% ( 14) 00:48:02.542 20.236 - 20.358: 96.8652% ( 17) 00:48:02.542 20.358 - 20.480: 97.0443% ( 20) 00:48:02.542 20.480 - 20.602: 97.2593% ( 24) 00:48:02.542 20.602 - 20.724: 97.3578% ( 11) 00:48:02.542 20.724 - 20.846: 97.4384% ( 9) 00:48:02.542 20.846 - 20.968: 97.6265% ( 21) 00:48:02.542 20.968 - 21.090: 97.7519% ( 14) 00:48:02.542 21.090 - 21.211: 97.8146% ( 7) 00:48:02.542 21.211 - 21.333: 97.8952% ( 9) 00:48:02.542 21.333 - 21.455: 97.9937% ( 11) 00:48:02.542 21.455 - 21.577: 98.0654% ( 8) 00:48:02.542 21.577 - 21.699: 98.1281% ( 7) 00:48:02.542 21.699 - 21.821: 98.1370% ( 1) 00:48:02.542 21.821 - 21.943: 98.1460% ( 1) 00:48:02.542 21.943 - 22.065: 98.2535% ( 12) 00:48:02.542 22.065 - 22.187: 98.2893% ( 4) 00:48:02.542 22.187 - 22.309: 98.3341% ( 5) 00:48:02.542 22.309 - 22.430: 98.3430% ( 1) 00:48:02.542 22.430 - 22.552: 98.3520% ( 1) 00:48:02.542 22.552 - 22.674: 98.3699% ( 2) 00:48:02.542 22.674 - 22.796: 98.3878% ( 2) 00:48:02.542 22.796 - 22.918: 98.4326% ( 5) 00:48:02.542 22.918 - 23.040: 98.4505% ( 2) 00:48:02.542 23.040 - 23.162: 98.4684% ( 2) 00:48:02.542 23.162 - 23.284: 98.4774% ( 1) 00:48:02.542 23.284 - 23.406: 98.4863% ( 1) 00:48:02.542 23.406 - 23.528: 98.5043% ( 2) 00:48:02.542 23.528 - 23.650: 98.5132% ( 1) 00:48:02.542 23.650 - 23.771: 98.5311% ( 2) 00:48:02.542 23.771 - 23.893: 98.5490% ( 2) 00:48:02.542 24.015 - 24.137: 98.5580% ( 1) 00:48:02.542 24.137 - 24.259: 98.5670% ( 1) 00:48:02.542 24.259 - 24.381: 98.5849% ( 2) 00:48:02.542 24.381 - 24.503: 98.6028% ( 2) 00:48:02.542 24.503 - 24.625: 98.6117% ( 1) 00:48:02.542 24.625 - 24.747: 98.6386% ( 3) 00:48:02.542 24.747 - 24.869: 98.6476% ( 1) 00:48:02.542 24.869 - 24.990: 98.6655% ( 2) 00:48:02.542 24.990 - 25.112: 98.6834% ( 2) 00:48:02.542 25.112 - 25.234: 98.6923% ( 1) 00:48:02.542 25.234 - 25.356: 98.7282% ( 4) 00:48:02.542 25.356 - 25.478: 98.7371% ( 1) 00:48:02.542 25.478 - 25.600: 98.7461% ( 1) 00:48:02.542 25.600 - 25.722: 98.7909% ( 5) 00:48:02.542 25.722 - 25.844: 98.7998% ( 1) 00:48:02.542 25.844 - 25.966: 98.8177% ( 2) 00:48:02.542 25.966 - 26.088: 98.8804% ( 7) 00:48:02.542 26.088 - 26.210: 98.9700% ( 10) 00:48:02.542 26.210 - 26.331: 99.0416% ( 8) 00:48:02.542 26.331 - 26.453: 99.1312% ( 10) 00:48:02.542 26.453 - 26.575: 99.2476% ( 13) 00:48:02.542 26.575 - 26.697: 99.3014% ( 6) 00:48:02.542 26.697 - 26.819: 99.3462% ( 5) 00:48:02.542 26.819 - 26.941: 99.3820% ( 4) 00:48:02.542 26.941 - 27.063: 99.4178% ( 4) 00:48:02.542 27.063 - 
27.185: 99.4357% ( 2) 00:48:02.542 27.185 - 27.307: 99.4447% ( 1) 00:48:02.542 27.307 - 27.429: 99.4536% ( 1) 00:48:02.542 27.429 - 27.550: 99.4626% ( 1) 00:48:02.542 27.672 - 27.794: 99.4805% ( 2) 00:48:02.542 27.916 - 28.038: 99.4984% ( 2) 00:48:02.542 28.404 - 28.526: 99.5074% ( 1) 00:48:02.542 28.526 - 28.648: 99.5253% ( 2) 00:48:02.542 28.648 - 28.770: 99.5343% ( 1) 00:48:02.542 28.770 - 28.891: 99.5522% ( 2) 00:48:02.542 29.013 - 29.135: 99.5611% ( 1) 00:48:02.542 29.135 - 29.257: 99.5790% ( 2) 00:48:02.542 29.257 - 29.379: 99.5880% ( 1) 00:48:02.542 29.501 - 29.623: 99.5970% ( 1) 00:48:02.542 29.623 - 29.745: 99.6059% ( 1) 00:48:02.542 29.745 - 29.867: 99.6238% ( 2) 00:48:02.542 29.867 - 29.989: 99.6328% ( 1) 00:48:02.542 30.354 - 30.476: 99.6507% ( 2) 00:48:02.542 30.476 - 30.598: 99.6597% ( 1) 00:48:02.542 30.598 - 30.720: 99.6686% ( 1) 00:48:02.542 30.842 - 30.964: 99.6865% ( 2) 00:48:02.542 30.964 - 31.086: 99.6955% ( 1) 00:48:02.542 31.086 - 31.208: 99.7044% ( 1) 00:48:02.542 31.208 - 31.451: 99.7313% ( 3) 00:48:02.542 31.451 - 31.695: 99.7761% ( 5) 00:48:02.542 31.695 - 31.939: 99.7940% ( 2) 00:48:02.542 31.939 - 32.183: 99.8030% ( 1) 00:48:02.542 32.183 - 32.427: 99.8298% ( 3) 00:48:02.542 32.427 - 32.670: 99.8388% ( 1) 00:48:02.542 32.914 - 33.158: 99.8567% ( 2) 00:48:02.542 33.646 - 33.890: 99.8657% ( 1) 00:48:02.542 35.109 - 35.352: 99.8746% ( 1) 00:48:02.542 35.352 - 35.596: 99.8836% ( 1) 00:48:02.542 37.059 - 37.303: 99.9015% ( 2) 00:48:02.542 38.522 - 38.766: 99.9104% ( 1) 00:48:02.542 38.766 - 39.010: 99.9194% ( 1) 00:48:02.542 44.617 - 44.861: 99.9283% ( 1) 00:48:02.542 45.836 - 46.080: 99.9373% ( 1) 00:48:02.542 51.688 - 51.931: 99.9463% ( 1) 00:48:02.542 55.589 - 55.832: 99.9552% ( 1) 00:48:02.542 60.465 - 60.709: 99.9642% ( 1) 00:48:02.542 81.432 - 81.920: 99.9731% ( 1) 00:48:02.542 81.920 - 82.408: 99.9821% ( 1) 00:48:02.542 122.392 - 122.880: 99.9910% ( 1) 00:48:02.542 157.989 - 158.964: 100.0000% ( 1) 00:48:02.542 00:48:02.542 Complete histogram 00:48:02.542 ================== 00:48:02.542 Range in us Cumulative Count 00:48:02.542 8.960 - 9.021: 0.1702% ( 19) 00:48:02.542 9.021 - 9.082: 3.1438% ( 332) 00:48:02.542 9.082 - 9.143: 9.7358% ( 736) 00:48:02.542 9.143 - 9.204: 15.3157% ( 623) 00:48:02.542 9.204 - 9.265: 19.2029% ( 434) 00:48:02.542 9.265 - 9.326: 22.2302% ( 338) 00:48:02.542 9.326 - 9.387: 29.5387% ( 816) 00:48:02.542 9.387 - 9.448: 40.4120% ( 1214) 00:48:02.542 9.448 - 9.509: 49.4940% ( 1014) 00:48:02.542 9.509 - 9.570: 56.8383% ( 820) 00:48:02.542 9.570 - 9.630: 61.6570% ( 538) 00:48:02.542 9.630 - 9.691: 65.1232% ( 387) 00:48:02.542 9.691 - 9.752: 67.9355% ( 314) 00:48:02.542 9.752 - 9.813: 69.7179% ( 199) 00:48:02.542 9.813 - 9.874: 71.0255% ( 146) 00:48:02.542 9.874 - 9.935: 72.0197% ( 111) 00:48:02.542 9.935 - 9.996: 72.8437% ( 92) 00:48:02.542 9.996 - 10.057: 73.3721% ( 59) 00:48:02.542 10.057 - 10.118: 73.7304% ( 40) 00:48:02.542 10.118 - 10.179: 74.1155% ( 43) 00:48:02.542 10.179 - 10.240: 74.3574% ( 27) 00:48:02.542 10.240 - 10.301: 74.5992% ( 27) 00:48:02.542 10.301 - 10.362: 74.7873% ( 21) 00:48:02.542 10.362 - 10.423: 74.8679% ( 9) 00:48:02.543 10.423 - 10.484: 74.9843% ( 13) 00:48:02.543 10.484 - 10.545: 75.0470% ( 7) 00:48:02.543 10.545 - 10.606: 75.1008% ( 6) 00:48:02.543 10.606 - 10.667: 75.1635% ( 7) 00:48:02.543 10.667 - 10.728: 75.2082% ( 5) 00:48:02.543 10.728 - 10.789: 75.2172% ( 1) 00:48:02.543 10.789 - 10.850: 75.2441% ( 3) 00:48:02.543 10.850 - 10.910: 75.2799% ( 4) 00:48:02.543 10.910 - 10.971: 75.2888% ( 1) 00:48:02.543 
10.971 - 11.032: 75.2978% ( 1) 00:48:02.543 11.032 - 11.093: 75.3068% ( 1) 00:48:02.543 11.093 - 11.154: 75.3157% ( 1) 00:48:02.543 11.154 - 11.215: 75.3247% ( 1) 00:48:02.543 11.215 - 11.276: 75.3515% ( 3) 00:48:02.543 11.276 - 11.337: 75.3695% ( 2) 00:48:02.543 11.337 - 11.398: 75.4232% ( 6) 00:48:02.543 11.398 - 11.459: 75.4322% ( 1) 00:48:02.543 11.520 - 11.581: 75.5038% ( 8) 00:48:02.543 11.581 - 11.642: 75.5128% ( 1) 00:48:02.543 11.642 - 11.703: 75.5307% ( 2) 00:48:02.543 11.703 - 11.764: 75.5575% ( 3) 00:48:02.543 11.764 - 11.825: 75.6023% ( 5) 00:48:02.543 11.825 - 11.886: 75.6202% ( 2) 00:48:02.543 11.886 - 11.947: 75.6561% ( 4) 00:48:02.543 11.947 - 12.008: 75.6829% ( 3) 00:48:02.543 12.008 - 12.069: 75.6919% ( 1) 00:48:02.543 12.069 - 12.130: 75.7188% ( 3) 00:48:02.543 12.130 - 12.190: 75.7546% ( 4) 00:48:02.543 12.190 - 12.251: 75.7635% ( 1) 00:48:02.543 12.251 - 12.312: 75.8621% ( 11) 00:48:02.543 12.312 - 12.373: 76.5517% ( 77) 00:48:02.543 12.373 - 12.434: 77.5638% ( 113) 00:48:02.543 12.434 - 12.495: 78.3251% ( 85) 00:48:02.543 12.495 - 12.556: 79.0864% ( 85) 00:48:02.543 12.556 - 12.617: 79.9642% ( 98) 00:48:02.543 12.617 - 12.678: 82.2212% ( 252) 00:48:02.543 12.678 - 12.739: 84.5052% ( 255) 00:48:02.543 12.739 - 12.800: 86.5920% ( 233) 00:48:02.543 12.800 - 12.861: 88.7237% ( 238) 00:48:02.543 12.861 - 12.922: 90.6135% ( 211) 00:48:02.543 12.922 - 12.983: 91.9301% ( 147) 00:48:02.543 12.983 - 13.044: 93.2736% ( 150) 00:48:02.543 13.044 - 13.105: 94.0439% ( 86) 00:48:02.543 13.105 - 13.166: 94.7783% ( 82) 00:48:02.543 13.166 - 13.227: 95.2082% ( 48) 00:48:02.543 13.227 - 13.288: 95.5755% ( 41) 00:48:02.543 13.288 - 13.349: 95.9069% ( 37) 00:48:02.543 13.349 - 13.410: 96.1666% ( 29) 00:48:02.543 13.410 - 13.470: 96.4980% ( 37) 00:48:02.543 13.470 - 13.531: 96.7309% ( 26) 00:48:02.543 13.531 - 13.592: 96.8921% ( 18) 00:48:02.543 13.592 - 13.653: 97.0712% ( 20) 00:48:02.543 13.653 - 13.714: 97.2324% ( 18) 00:48:02.543 13.714 - 13.775: 97.3041% ( 8) 00:48:02.543 13.775 - 13.836: 97.3668% ( 7) 00:48:02.543 13.836 - 13.897: 97.4742% ( 12) 00:48:02.543 13.897 - 13.958: 97.5280% ( 6) 00:48:02.543 13.958 - 14.019: 97.6355% ( 12) 00:48:02.543 14.019 - 14.080: 97.7609% ( 14) 00:48:02.543 14.080 - 14.141: 97.8504% ( 10) 00:48:02.543 14.141 - 14.202: 97.9489% ( 11) 00:48:02.543 14.202 - 14.263: 97.9937% ( 5) 00:48:02.543 14.263 - 14.324: 98.0564% ( 7) 00:48:02.543 14.324 - 14.385: 98.0923% ( 4) 00:48:02.543 14.385 - 14.446: 98.1460% ( 6) 00:48:02.543 14.446 - 14.507: 98.1549% ( 1) 00:48:02.543 14.507 - 14.568: 98.2087% ( 6) 00:48:02.543 14.568 - 14.629: 98.2266% ( 2) 00:48:02.543 14.629 - 14.690: 98.2445% ( 2) 00:48:02.543 14.690 - 14.750: 98.3341% ( 10) 00:48:02.543 14.750 - 14.811: 98.3699% ( 4) 00:48:02.543 14.811 - 14.872: 98.3878% ( 2) 00:48:02.543 14.994 - 15.055: 98.4147% ( 3) 00:48:02.543 15.055 - 15.116: 98.4326% ( 2) 00:48:02.543 15.116 - 15.177: 98.4416% ( 1) 00:48:02.543 15.177 - 15.238: 98.4595% ( 2) 00:48:02.543 15.299 - 15.360: 98.4863% ( 3) 00:48:02.543 15.421 - 15.482: 98.4953% ( 1) 00:48:02.543 15.482 - 15.543: 98.5222% ( 3) 00:48:02.543 15.543 - 15.604: 98.5401% ( 2) 00:48:02.543 15.604 - 15.726: 98.5670% ( 3) 00:48:02.543 15.726 - 15.848: 98.5759% ( 1) 00:48:02.543 15.848 - 15.970: 98.5938% ( 2) 00:48:02.543 16.091 - 16.213: 98.6207% ( 3) 00:48:02.543 16.213 - 16.335: 98.6296% ( 1) 00:48:02.543 16.335 - 16.457: 98.6476% ( 2) 00:48:02.543 16.457 - 16.579: 98.6655% ( 2) 00:48:02.543 16.579 - 16.701: 98.6834% ( 2) 00:48:02.543 16.701 - 16.823: 98.6923% ( 1) 
00:48:02.543 16.823 - 16.945: 98.7013% ( 1) 00:48:02.543 16.945 - 17.067: 98.7103% ( 1) 00:48:02.543 17.067 - 17.189: 98.7192% ( 1) 00:48:02.543 17.310 - 17.432: 98.7550% ( 4) 00:48:02.543 17.554 - 17.676: 98.7730% ( 2) 00:48:02.543 17.798 - 17.920: 98.7819% ( 1) 00:48:02.543 17.920 - 18.042: 98.8088% ( 3) 00:48:02.543 18.651 - 18.773: 98.8177% ( 1) 00:48:02.543 18.773 - 18.895: 98.8267% ( 1) 00:48:02.543 19.017 - 19.139: 98.8356% ( 1) 00:48:02.543 19.261 - 19.383: 98.8536% ( 2) 00:48:02.543 19.383 - 19.505: 98.8625% ( 1) 00:48:02.543 19.505 - 19.627: 98.8894% ( 3) 00:48:02.543 19.627 - 19.749: 98.9073% ( 2) 00:48:02.543 20.236 - 20.358: 98.9163% ( 1) 00:48:02.543 20.358 - 20.480: 98.9252% ( 1) 00:48:02.543 20.480 - 20.602: 98.9610% ( 4) 00:48:02.543 20.602 - 20.724: 98.9879% ( 3) 00:48:02.543 20.724 - 20.846: 99.0148% ( 3) 00:48:02.543 20.846 - 20.968: 99.0775% ( 7) 00:48:02.543 20.968 - 21.090: 99.1760% ( 11) 00:48:02.543 21.090 - 21.211: 99.2566% ( 9) 00:48:02.543 21.211 - 21.333: 99.3551% ( 11) 00:48:02.543 21.333 - 21.455: 99.4178% ( 7) 00:48:02.543 21.455 - 21.577: 99.4716% ( 6) 00:48:02.543 21.577 - 21.699: 99.4984% ( 3) 00:48:02.543 21.699 - 21.821: 99.5163% ( 2) 00:48:02.543 21.943 - 22.065: 99.5343% ( 2) 00:48:02.543 22.065 - 22.187: 99.5611% ( 3) 00:48:02.543 22.187 - 22.309: 99.5701% ( 1) 00:48:02.543 23.162 - 23.284: 99.5790% ( 1) 00:48:02.543 23.406 - 23.528: 99.6059% ( 3) 00:48:02.543 24.015 - 24.137: 99.6149% ( 1) 00:48:02.543 24.503 - 24.625: 99.6328% ( 2) 00:48:02.543 24.747 - 24.869: 99.6507% ( 2) 00:48:02.543 24.990 - 25.112: 99.6597% ( 1) 00:48:02.543 25.112 - 25.234: 99.6776% ( 2) 00:48:02.543 25.234 - 25.356: 99.6865% ( 1) 00:48:02.543 25.356 - 25.478: 99.6955% ( 1) 00:48:02.543 25.600 - 25.722: 99.7044% ( 1) 00:48:02.543 26.210 - 26.331: 99.7223% ( 2) 00:48:02.543 26.453 - 26.575: 99.7313% ( 1) 00:48:02.543 26.575 - 26.697: 99.7403% ( 1) 00:48:02.543 26.697 - 26.819: 99.7582% ( 2) 00:48:02.543 26.819 - 26.941: 99.8030% ( 5) 00:48:02.543 26.941 - 27.063: 99.8119% ( 1) 00:48:02.543 27.063 - 27.185: 99.8209% ( 1) 00:48:02.543 27.307 - 27.429: 99.8298% ( 1) 00:48:02.543 27.916 - 28.038: 99.8477% ( 2) 00:48:02.543 28.038 - 28.160: 99.8567% ( 1) 00:48:02.543 28.282 - 28.404: 99.8657% ( 1) 00:48:02.543 28.526 - 28.648: 99.8746% ( 1) 00:48:02.543 29.379 - 29.501: 99.8836% ( 1) 00:48:02.543 29.745 - 29.867: 99.8925% ( 1) 00:48:02.543 31.086 - 31.208: 99.9015% ( 1) 00:48:02.543 34.865 - 35.109: 99.9104% ( 1) 00:48:02.543 35.109 - 35.352: 99.9194% ( 1) 00:48:02.543 35.840 - 36.084: 99.9373% ( 2) 00:48:02.543 46.568 - 46.811: 99.9463% ( 1) 00:48:02.543 49.981 - 50.225: 99.9552% ( 1) 00:48:02.543 63.878 - 64.366: 99.9642% ( 1) 00:48:02.543 75.581 - 76.069: 99.9731% ( 1) 00:48:02.543 79.482 - 79.970: 99.9821% ( 1) 00:48:02.543 216.503 - 217.478: 99.9910% ( 1) 00:48:02.543 218.453 - 219.429: 100.0000% ( 1) 00:48:02.543 00:48:02.543 00:48:02.543 real 0m1.339s 00:48:02.543 user 0m1.127s 00:48:02.543 sys 0m0.125s 00:48:02.543 12:11:34 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:02.543 12:11:34 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:48:02.543 ************************************ 00:48:02.543 END TEST nvme_overhead 00:48:02.543 ************************************ 00:48:02.543 12:11:34 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:48:02.543 12:11:34 nvme -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:48:02.543 12:11:34 nvme -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:48:02.543 12:11:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:02.543 ************************************ 00:48:02.543 START TEST nvme_arbitration 00:48:02.543 ************************************ 00:48:02.544 12:11:34 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:48:05.833 Initializing NVMe Controllers 00:48:05.833 Attached to 0000:00:10.0 00:48:05.833 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:48:05.833 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:48:05.833 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:48:05.833 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:48:05.833 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:48:05.833 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:48:05.833 Initialization complete. Launching workers. 00:48:05.833 Starting thread on core 1 with urgent priority queue 00:48:05.833 Starting thread on core 2 with urgent priority queue 00:48:05.833 Starting thread on core 3 with urgent priority queue 00:48:05.833 Starting thread on core 0 with urgent priority queue 00:48:05.833 QEMU NVMe Ctrl (12340 ) core 0: 1066.67 IO/s 93.75 secs/100000 ios 00:48:05.833 QEMU NVMe Ctrl (12340 ) core 1: 1024.00 IO/s 97.66 secs/100000 ios 00:48:05.833 QEMU NVMe Ctrl (12340 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:48:05.833 QEMU NVMe Ctrl (12340 ) core 3: 512.00 IO/s 195.31 secs/100000 ios 00:48:05.833 ======================================================== 00:48:05.833 00:48:05.833 00:48:05.833 real 0m3.459s 00:48:05.833 user 0m9.408s 00:48:05.834 sys 0m0.124s 00:48:05.834 12:11:37 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:05.834 12:11:37 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:48:05.834 ************************************ 00:48:05.834 END TEST nvme_arbitration 00:48:05.834 ************************************ 00:48:06.092 12:11:37 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:48:06.092 12:11:37 nvme -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:48:06.092 12:11:37 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:06.092 12:11:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:06.092 ************************************ 00:48:06.092 START TEST nvme_single_aen 00:48:06.092 ************************************ 00:48:06.092 12:11:37 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:48:06.350 Asynchronous Event Request test 00:48:06.350 Attached to 0000:00:10.0 00:48:06.350 Reset controller to setup AER completions for this process 00:48:06.350 Registering asynchronous event callbacks... 
00:48:06.350 Getting orig temperature thresholds of all controllers 00:48:06.350 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:48:06.350 Setting all controllers temperature threshold low to trigger AER 00:48:06.350 Waiting for all controllers temperature threshold to be set lower 00:48:06.350 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:48:06.350 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:48:06.350 Waiting for all controllers to trigger AER and reset threshold 00:48:06.350 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:48:06.350 Cleaning up... 00:48:06.350 00:48:06.350 real 0m0.291s 00:48:06.350 user 0m0.078s 00:48:06.350 sys 0m0.157s 00:48:06.350 12:11:38 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:06.350 12:11:38 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:48:06.350 ************************************ 00:48:06.350 END TEST nvme_single_aen 00:48:06.350 ************************************ 00:48:06.350 12:11:38 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:48:06.350 12:11:38 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:06.350 12:11:38 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:06.350 12:11:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:06.350 ************************************ 00:48:06.350 START TEST nvme_doorbell_aers 00:48:06.350 ************************************ 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # nvme_doorbell_aers 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # bdfs=() 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # local bdfs 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:48:06.350 12:11:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:48:06.608 [2024-06-10 12:11:38.627982] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173571) is not found. Dropping the request. 00:48:16.580 Executing: test_write_invalid_db 00:48:16.580 Waiting for AER completion... 00:48:16.580 Failure: test_write_invalid_db 00:48:16.580 00:48:16.580 Executing: test_invalid_db_write_overflow_sq 00:48:16.580 Waiting for AER completion... 
00:48:16.580 Failure: test_invalid_db_write_overflow_sq 00:48:16.580 00:48:16.580 Executing: test_invalid_db_write_overflow_cq 00:48:16.580 Waiting for AER completion... 00:48:16.580 Failure: test_invalid_db_write_overflow_cq 00:48:16.580 00:48:16.580 00:48:16.580 real 0m10.122s 00:48:16.580 user 0m7.343s 00:48:16.580 sys 0m2.731s 00:48:16.580 12:11:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:16.580 12:11:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:48:16.580 ************************************ 00:48:16.580 END TEST nvme_doorbell_aers 00:48:16.580 ************************************ 00:48:16.580 12:11:48 nvme -- nvme/nvme.sh@97 -- # uname 00:48:16.580 12:11:48 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:48:16.580 12:11:48 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:48:16.580 12:11:48 nvme -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:48:16.580 12:11:48 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:16.580 12:11:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:16.580 ************************************ 00:48:16.580 START TEST nvme_multi_aen 00:48:16.580 ************************************ 00:48:16.580 12:11:48 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:48:16.935 [2024-06-10 12:11:48.758530] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173571) is not found. Dropping the request. 00:48:16.935 [2024-06-10 12:11:48.758770] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173571) is not found. Dropping the request. 00:48:16.935 [2024-06-10 12:11:48.758819] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173571) is not found. Dropping the request. 00:48:16.935 Child process pid: 173754 00:48:17.193 [Child] Asynchronous Event Request test 00:48:17.193 [Child] Attached to 0000:00:10.0 00:48:17.193 [Child] Registering asynchronous event callbacks... 00:48:17.193 [Child] Getting orig temperature thresholds of all controllers 00:48:17.193 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:48:17.193 [Child] Waiting for all controllers to trigger AER and reset threshold 00:48:17.193 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:48:17.193 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:48:17.193 [Child] Cleaning up... 00:48:17.193 Asynchronous Event Request test 00:48:17.193 Attached to 0000:00:10.0 00:48:17.193 Reset controller to setup AER completions for this process 00:48:17.193 Registering asynchronous event callbacks... 00:48:17.193 Getting orig temperature thresholds of all controllers 00:48:17.193 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:48:17.193 Setting all controllers temperature threshold low to trigger AER 00:48:17.193 Waiting for all controllers temperature threshold to be set lower 00:48:17.193 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:48:17.193 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:48:17.193 Waiting for all controllers to trigger AER and reset threshold 00:48:17.193 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:48:17.193 Cleaning up... 
00:48:17.193 00:48:17.193 real 0m0.750s 00:48:17.193 user 0m0.232s 00:48:17.193 sys 0m0.333s 00:48:17.193 12:11:49 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:17.193 ************************************ 00:48:17.193 END TEST nvme_multi_aen 00:48:17.193 ************************************ 00:48:17.193 12:11:49 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:48:17.451 12:11:49 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:48:17.451 12:11:49 nvme -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:48:17.451 12:11:49 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:17.451 12:11:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:17.451 ************************************ 00:48:17.451 START TEST nvme_startup 00:48:17.451 ************************************ 00:48:17.451 12:11:49 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:48:17.709 Initializing NVMe Controllers 00:48:17.709 Attached to 0000:00:10.0 00:48:17.709 Initialization complete. 00:48:17.709 Time used:206565.906 (us). 00:48:17.709 00:48:17.709 real 0m0.323s 00:48:17.709 user 0m0.141s 00:48:17.709 sys 0m0.135s 00:48:17.709 12:11:49 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:17.709 ************************************ 00:48:17.709 12:11:49 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:48:17.709 END TEST nvme_startup 00:48:17.709 ************************************ 00:48:17.709 12:11:49 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:48:17.709 12:11:49 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:17.709 12:11:49 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:17.709 12:11:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:17.709 ************************************ 00:48:17.709 START TEST nvme_multi_secondary 00:48:17.709 ************************************ 00:48:17.709 12:11:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # nvme_multi_secondary 00:48:17.709 12:11:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=173831 00:48:17.709 12:11:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:48:17.709 12:11:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=173832 00:48:17.709 12:11:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:48:17.709 12:11:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:48:20.995 Initializing NVMe Controllers 00:48:20.995 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:48:20.995 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:48:20.995 Initialization complete. Launching workers. 
00:48:20.995 ======================================================== 00:48:20.995 Latency(us) 00:48:20.995 Device Information : IOPS MiB/s Average min max 00:48:20.995 PCIE (0000:00:10.0) NSID 1 from core 2: 15381.00 60.08 1039.61 173.95 20505.19 00:48:20.995 ======================================================== 00:48:20.995 Total : 15381.00 60.08 1039.61 173.95 20505.19 00:48:20.995 00:48:21.253 12:11:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 173831 00:48:21.512 Initializing NVMe Controllers 00:48:21.512 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:48:21.512 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:48:21.512 Initialization complete. Launching workers. 00:48:21.512 ======================================================== 00:48:21.512 Latency(us) 00:48:21.512 Device Information : IOPS MiB/s Average min max 00:48:21.512 PCIE (0000:00:10.0) NSID 1 from core 1: 31881.65 124.54 501.58 171.23 3350.67 00:48:21.512 ======================================================== 00:48:21.512 Total : 31881.65 124.54 501.58 171.23 3350.67 00:48:21.512 00:48:23.416 Initializing NVMe Controllers 00:48:23.416 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:48:23.416 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:48:23.416 Initialization complete. Launching workers. 00:48:23.416 ======================================================== 00:48:23.416 Latency(us) 00:48:23.416 Device Information : IOPS MiB/s Average min max 00:48:23.416 PCIE (0000:00:10.0) NSID 1 from core 0: 40018.99 156.32 399.50 164.66 5059.94 00:48:23.416 ======================================================== 00:48:23.416 Total : 40018.99 156.32 399.50 164.66 5059.94 00:48:23.416 00:48:23.416 12:11:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 173832 00:48:23.416 12:11:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=173902 00:48:23.416 12:11:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=173903 00:48:23.416 12:11:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:48:23.416 12:11:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:48:23.417 12:11:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:48:26.721 Initializing NVMe Controllers 00:48:26.721 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:48:26.721 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:48:26.721 Initialization complete. Launching workers. 00:48:26.721 ======================================================== 00:48:26.721 Latency(us) 00:48:26.721 Device Information : IOPS MiB/s Average min max 00:48:26.721 PCIE (0000:00:10.0) NSID 1 from core 0: 32148.87 125.58 497.30 167.78 4608.72 00:48:26.721 ======================================================== 00:48:26.721 Total : 32148.87 125.58 497.30 167.78 4608.72 00:48:26.721 00:48:26.979 Initializing NVMe Controllers 00:48:26.979 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:48:26.979 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:48:26.979 Initialization complete. Launching workers. 
00:48:26.979 ======================================================== 00:48:26.979 Latency(us) 00:48:26.979 Device Information : IOPS MiB/s Average min max 00:48:26.979 PCIE (0000:00:10.0) NSID 1 from core 1: 33583.07 131.18 476.10 160.68 7512.48 00:48:26.979 ======================================================== 00:48:26.979 Total : 33583.07 131.18 476.10 160.68 7512.48 00:48:26.979 00:48:28.883 Initializing NVMe Controllers 00:48:28.883 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:48:28.883 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:48:28.883 Initialization complete. Launching workers. 00:48:28.883 ======================================================== 00:48:28.883 Latency(us) 00:48:28.883 Device Information : IOPS MiB/s Average min max 00:48:28.883 PCIE (0000:00:10.0) NSID 1 from core 2: 16931.20 66.14 944.30 165.29 17455.67 00:48:28.883 ======================================================== 00:48:28.883 Total : 16931.20 66.14 944.30 165.29 17455.67 00:48:28.883 00:48:28.883 12:12:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 173902 00:48:28.883 12:12:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 173903 00:48:28.883 00:48:28.883 real 0m11.207s 00:48:28.883 user 0m18.723s 00:48:28.883 sys 0m1.100s 00:48:28.883 12:12:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:28.883 ************************************ 00:48:28.883 END TEST nvme_multi_secondary 00:48:28.883 ************************************ 00:48:28.883 12:12:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:48:28.883 12:12:00 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:48:28.883 12:12:00 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:48:28.883 12:12:00 nvme -- common/autotest_common.sh@1088 -- # [[ -e /proc/173128 ]] 00:48:28.883 12:12:00 nvme -- common/autotest_common.sh@1089 -- # kill 173128 00:48:28.883 12:12:00 nvme -- common/autotest_common.sh@1090 -- # wait 173128 00:48:28.883 [2024-06-10 12:12:00.883884] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173753) is not found. Dropping the request. 00:48:28.883 [2024-06-10 12:12:00.884044] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173753) is not found. Dropping the request. 00:48:28.883 [2024-06-10 12:12:00.884082] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173753) is not found. Dropping the request. 00:48:28.883 [2024-06-10 12:12:00.884139] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173753) is not found. Dropping the request. 
00:48:29.142 12:12:01 nvme -- common/autotest_common.sh@1092 -- # rm -f /var/run/spdk_stub0 00:48:29.142 12:12:01 nvme -- common/autotest_common.sh@1096 -- # echo 2 00:48:29.142 12:12:01 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:48:29.142 12:12:01 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:29.142 12:12:01 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:29.142 12:12:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:29.142 ************************************ 00:48:29.142 START TEST bdev_nvme_reset_stuck_adm_cmd 00:48:29.142 ************************************ 00:48:29.142 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:48:29.400 * Looking for test storage... 00:48:29.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # bdfs=() 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # local bdfs 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # bdfs=() 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # local bdfs 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1526 -- # echo 0000:00:10.0 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=174050 00:48:29.400 12:12:01 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 174050 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@830 -- # '[' -z 174050 ']' 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local max_retries=100 00:48:29.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # xtrace_disable 00:48:29.400 12:12:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:48:29.400 [2024-06-10 12:12:01.432187] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:48:29.400 [2024-06-10 12:12:01.432418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174050 ] 00:48:29.724 [2024-06-10 12:12:01.661020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:29.980 [2024-06-10 12:12:01.885214] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:48:29.981 [2024-06-10 12:12:01.885265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:48:29.981 [2024-06-10 12:12:01.885370] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:48:29.981 [2024-06-10 12:12:01.885375] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@863 -- # return 0 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:48:30.912 nvme0n1 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_QoF3q.txt 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:48:30.912 true 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1718021522 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=174082 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:48:30.912 12:12:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:48:33.445 [2024-06-10 12:12:04.889118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:48:33.445 [2024-06-10 12:12:04.889520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:48:33.445 [2024-06-10 12:12:04.889565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:48:33.445 [2024-06-10 12:12:04.889629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:33.445 [2024-06-10 12:12:04.891613] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 174082 00:48:33.445 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 174082 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 174082 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_QoF3q.txt 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:48:33.445 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_QoF3q.txt 00:48:33.446 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 174050 00:48:33.446 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@949 -- # '[' -z 174050 ']' 00:48:33.446 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # kill -0 174050 00:48:33.446 12:12:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # uname 00:48:33.446 12:12:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:48:33.446 12:12:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 174050 00:48:33.446 12:12:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:48:33.446 12:12:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:48:33.446 12:12:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 174050' 00:48:33.446 killing process with pid 174050 00:48:33.446 12:12:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # kill 174050 00:48:33.446 12:12:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # wait 174050 00:48:35.973 12:12:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:48:35.973 12:12:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:48:35.973 00:48:35.973 real 0m6.736s 00:48:35.973 user 0m23.218s 00:48:35.973 sys 0m0.765s 00:48:35.973 12:12:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:35.973 12:12:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:48:35.973 ************************************ 00:48:35.973 END TEST bdev_nvme_reset_stuck_adm_cmd 00:48:35.973 ************************************ 00:48:35.973 12:12:07 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:48:35.973 12:12:07 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:48:35.973 12:12:07 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:35.973 12:12:07 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:35.973 12:12:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:35.973 ************************************ 00:48:35.973 START TEST nvme_fio 00:48:35.973 ************************************ 00:48:35.973 12:12:07 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # nvme_fio_test 00:48:35.973 12:12:07 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:48:35.973 12:12:07 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:48:35.973 12:12:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:48:35.973 12:12:07 nvme.nvme_fio -- 
common/autotest_common.sh@1512 -- # bdfs=() 00:48:35.973 12:12:07 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # local bdfs 00:48:35.973 12:12:07 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:48:35.973 12:12:07 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:48:35.973 12:12:07 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:48:36.231 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:48:36.231 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:48:36.231 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:48:36.231 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:48:36.231 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:48:36.231 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:48:36.231 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:48:36.489 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:48:36.489 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:48:36.747 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:48:36.747 12:12:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1359 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local sanitizers 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # shift 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local asan_lib= 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # grep libasan 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # break 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:48:36.747 12:12:08 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:48:37.006 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:48:37.006 fio-3.35 00:48:37.006 Starting 1 thread 00:48:40.294 00:48:40.294 test: (groupid=0, jobs=1): err= 0: pid=174241: Mon Jun 10 12:12:11 2024 00:48:40.294 read: IOPS=16.4k, BW=63.9MiB/s (67.0MB/s)(128MiB/2001msec) 00:48:40.294 slat (usec): min=4, max=133, avg= 6.08, stdev= 1.81 00:48:40.294 clat (usec): min=311, max=13012, avg=3870.30, stdev=796.58 00:48:40.294 lat (usec): min=317, max=13026, avg=3876.38, stdev=797.39 00:48:40.294 clat percentiles (usec): 00:48:40.294 | 1.00th=[ 2245], 5.00th=[ 2900], 10.00th=[ 3064], 20.00th=[ 3228], 00:48:40.294 | 30.00th=[ 3359], 40.00th=[ 3621], 50.00th=[ 3884], 60.00th=[ 4015], 00:48:40.294 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 4948], 00:48:40.294 | 99.00th=[ 6980], 99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 8717], 00:48:40.294 | 99.99th=[ 9241] 00:48:40.294 bw ( KiB/s): min=59272, max=69928, per=100.00%, avg=65653.33, stdev=5631.71, samples=3 00:48:40.294 iops : min=14818, max=17482, avg=16413.33, stdev=1407.93, samples=3 00:48:40.294 write: IOPS=16.4k, BW=64.0MiB/s (67.1MB/s)(128MiB/2001msec); 0 zone resets 00:48:40.294 slat (usec): min=4, max=106, avg= 6.40, stdev= 2.02 00:48:40.294 clat (usec): min=359, max=16332, avg=3912.96, stdev=933.69 00:48:40.294 lat (usec): min=366, max=16344, avg=3919.35, stdev=934.46 00:48:40.294 clat percentiles (usec): 00:48:40.294 | 1.00th=[ 2343], 5.00th=[ 2933], 10.00th=[ 3097], 20.00th=[ 3261], 00:48:40.294 | 30.00th=[ 3392], 40.00th=[ 3654], 50.00th=[ 3916], 60.00th=[ 4047], 00:48:40.294 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5080], 00:48:40.294 | 99.00th=[ 7373], 99.50th=[ 8455], 99.90th=[12780], 99.95th=[16057], 00:48:40.294 | 99.99th=[16188] 00:48:40.294 bw ( KiB/s): min=58808, max=69888, per=99.85%, avg=65461.33, stdev=5866.02, samples=3 00:48:40.294 iops : min=14702, max=17472, avg=16365.33, stdev=1466.50, samples=3 00:48:40.294 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:48:40.294 lat (msec) : 2=0.48%, 4=56.92%, 10=42.45%, 20=0.10% 00:48:40.294 cpu : usr=99.90%, sys=0.05%, ctx=5, majf=0, minf=35 00:48:40.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:48:40.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:40.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:48:40.294 issued rwts: total=32732,32795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:40.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:48:40.294 00:48:40.294 Run status group 0 (all jobs): 00:48:40.294 READ: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=128MiB (134MB), run=2001-2001msec 00:48:40.294 WRITE: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=128MiB (134MB), run=2001-2001msec 00:48:40.294 ----------------------------------------------------- 00:48:40.294 Suppressions used: 00:48:40.294 count bytes template 00:48:40.294 1 32 /usr/src/fio/parse.c 00:48:40.294 ----------------------------------------------------- 00:48:40.294 00:48:40.294 12:12:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:48:40.294 12:12:12 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:48:40.294 00:48:40.294 real 0m4.323s 00:48:40.294 user 0m3.563s 00:48:40.294 sys 0m0.455s 00:48:40.294 12:12:12 
nvme.nvme_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:40.294 12:12:12 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:48:40.295 ************************************ 00:48:40.295 END TEST nvme_fio 00:48:40.295 ************************************ 00:48:40.295 00:48:40.295 real 0m49.306s 00:48:40.295 user 2m11.011s 00:48:40.295 sys 0m10.377s 00:48:40.295 12:12:12 nvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:40.553 12:12:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:48:40.553 ************************************ 00:48:40.553 END TEST nvme 00:48:40.553 ************************************ 00:48:40.553 12:12:12 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:48:40.553 12:12:12 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:48:40.553 12:12:12 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:40.553 12:12:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:40.553 12:12:12 -- common/autotest_common.sh@10 -- # set +x 00:48:40.553 ************************************ 00:48:40.553 START TEST nvme_scc 00:48:40.553 ************************************ 00:48:40.553 12:12:12 nvme_scc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:48:40.553 * Looking for test storage... 00:48:40.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:48:40.553 12:12:12 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:40.553 12:12:12 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:40.553 12:12:12 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:40.553 12:12:12 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:40.553 12:12:12 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:40.553 12:12:12 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:40.553 12:12:12 nvme_scc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:40.553 12:12:12 nvme_scc -- paths/export.sh@5 -- # export PATH 00:48:40.553 12:12:12 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:48:40.553 12:12:12 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:48:40.554 12:12:12 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:48:40.554 12:12:12 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:40.554 12:12:12 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:48:40.554 12:12:12 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:48:40.554 12:12:12 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:48:40.554 12:12:12 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:48:40.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:40.812 Waiting for block devices as requested 00:48:41.073 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:41.073 12:12:13 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:48:41.073 12:12:13 nvme_scc -- scripts/common.sh@15 -- # local i 00:48:41.073 12:12:13 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:48:41.073 12:12:13 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:48:41.073 12:12:13 nvme_scc -- scripts/common.sh@24 -- # return 0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:48:41.073 12:12:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:48:41.073 12:12:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:48:41.073 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:48:41.074 12:12:13 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:48:41.074 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:48:41.075 
12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 
00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:48:41.075 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:48:41.076 12:12:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0n1[ncap]="0x140000"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.076 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:48:41.336 
12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.336 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:48:41.337 12:12:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:48:41.337 
12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:48:41.337 12:12:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@192 -- # local ctrl 
feature=scc 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:48:41.337 12:12:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:48:41.337 12:12:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:48:41.337 12:12:13 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:48:41.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:41.854 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:48:42.791 12:12:14 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:48:42.791 12:12:14 nvme_scc -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:48:42.791 12:12:14 nvme_scc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:42.791 12:12:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:48:42.791 ************************************ 00:48:42.791 START TEST nvme_simple_copy 00:48:42.791 ************************************ 00:48:42.791 12:12:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:48:43.050 Initializing NVMe Controllers 00:48:43.050 Attaching to 0000:00:10.0 00:48:43.050 Controller supports SCC. Attached to 0000:00:10.0 00:48:43.050 Namespace ID: 1 size: 5GB 00:48:43.050 Initialization complete. 
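The controller selection above hinges on a single bit: functions.sh reads the cached identify data into bash associative arrays and treats a controller as SCC-capable when bit 8 of its ONCS field is set (0x15d & 0x100 for this QEMU device). A minimal standalone sketch of the same check, assuming nvme-cli is installed and the controller is visible as /dev/nvme0 — an illustration, not the functions.sh source:

ctrl_has_scc() {
    local ctrl=$1 oncs
    # ONCS = Optional NVM Command Support; bit 8 advertises the Copy (Simple Copy) command.
    oncs=$(nvme id-ctrl "/dev/$ctrl" | awk '/^oncs/ {print $3}')
    (( oncs & 1 << 8 ))
}
ctrl_has_scc nvme0 && echo "nvme0 supports Simple Copy"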
00:48:43.050 00:48:43.050 Controller QEMU NVMe Ctrl (12340 ) 00:48:43.050 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:48:43.050 Namespace Block Size:4096 00:48:43.050 Writing LBAs 0 to 63 with Random Data 00:48:43.050 Copied LBAs from 0 - 63 to the Destination LBA 256 00:48:43.050 LBAs matching Written Data: 64 00:48:43.050 ************************************ 00:48:43.050 END TEST nvme_simple_copy 00:48:43.050 ************************************ 00:48:43.050 00:48:43.050 real 0m0.374s 00:48:43.050 user 0m0.177s 00:48:43.050 sys 0m0.098s 00:48:43.050 12:12:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:43.050 12:12:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:48:43.050 ************************************ 00:48:43.050 END TEST nvme_scc 00:48:43.050 ************************************ 00:48:43.050 00:48:43.050 real 0m2.657s 00:48:43.050 user 0m0.893s 00:48:43.050 sys 0m1.596s 00:48:43.050 12:12:15 nvme_scc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:43.050 12:12:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:48:43.357 12:12:15 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:48:43.357 12:12:15 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]] 00:48:43.357 12:12:15 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]] 00:48:43.357 12:12:15 -- spdk/autotest.sh@236 -- # [[ 0 -eq 1 ]] 00:48:43.357 12:12:15 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:48:43.357 12:12:15 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:48:43.357 12:12:15 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:43.357 12:12:15 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:43.357 12:12:15 -- common/autotest_common.sh@10 -- # set +x 00:48:43.357 ************************************ 00:48:43.357 START TEST nvme_rpc 00:48:43.357 ************************************ 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:48:43.357 * Looking for test storage... 
00:48:43.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:48:43.357 12:12:15 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:43.357 12:12:15 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1523 -- # bdfs=() 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1523 -- # local bdfs 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1512 -- # bdfs=() 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1512 -- # local bdfs 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@1526 -- # echo 0000:00:10.0 00:48:43.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:43.357 12:12:15 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:48:43.357 12:12:15 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:48:43.357 12:12:15 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=174725 00:48:43.357 12:12:15 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:48:43.357 12:12:15 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 174725 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@830 -- # '[' -z 174725 ']' 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:48:43.357 12:12:15 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:48:43.617 [2024-06-10 12:12:15.424055] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:48:43.617 [2024-06-10 12:12:15.424695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174725 ] 00:48:43.617 [2024-06-10 12:12:15.640600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:43.877 [2024-06-10 12:12:15.930789] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:48:43.877 [2024-06-10 12:12:15.930796] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:48:44.812 12:12:16 nvme_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:48:44.812 12:12:16 nvme_rpc -- common/autotest_common.sh@863 -- # return 0 00:48:44.812 12:12:16 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:48:45.071 Nvme0n1 00:48:45.330 12:12:17 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:48:45.330 12:12:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:48:45.589 request: 00:48:45.589 { 00:48:45.589 "bdev_name": "Nvme0n1", 00:48:45.589 "filename": "non_existing_file", 00:48:45.589 "method": "bdev_nvme_apply_firmware", 00:48:45.589 "req_id": 1 00:48:45.589 } 00:48:45.589 Got JSON-RPC error response 00:48:45.589 response: 00:48:45.589 { 00:48:45.589 "code": -32603, 00:48:45.589 "message": "open file failed." 00:48:45.589 } 00:48:45.589 12:12:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:48:45.589 12:12:17 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:48:45.589 12:12:17 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:48:45.848 12:12:17 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:48:45.848 12:12:17 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 174725 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@949 -- # '[' -z 174725 ']' 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@953 -- # kill -0 174725 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@954 -- # uname 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 174725 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 174725' 00:48:45.848 killing process with pid 174725 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@968 -- # kill 174725 00:48:45.848 12:12:17 nvme_rpc -- common/autotest_common.sh@973 -- # wait 174725 00:48:49.133 ************************************ 00:48:49.133 END TEST nvme_rpc 00:48:49.133 ************************************ 00:48:49.133 00:48:49.133 real 0m5.367s 00:48:49.133 user 0m10.114s 00:48:49.133 sys 0m0.744s 00:48:49.133 12:12:20 nvme_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:49.133 12:12:20 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:48:49.133 12:12:20 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:48:49.133 12:12:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 
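The nvme_rpc test that just finished drives the firmware-update error path entirely over JSON-RPC: attach the controller as Nvme0, ask bdev_nvme_apply_firmware to open a file that does not exist, require the -32603 "open file failed." response shown above, then detach. Condensed into the same rpc.py calls (a sketch of the flow, not the nvme_rpc.sh source):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
# This call must fail; success here would mean the error path regressed.
if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "expected bdev_nvme_apply_firmware to fail" >&2
    exit 1
fi
$rpc bdev_nvme_detach_controller Nvme0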
00:48:49.133 12:12:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:49.133 12:12:20 -- common/autotest_common.sh@10 -- # set +x 00:48:49.133 ************************************ 00:48:49.133 START TEST nvme_rpc_timeouts 00:48:49.133 ************************************ 00:48:49.133 12:12:20 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:48:49.133 * Looking for test storage... 00:48:49.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:48:49.133 12:12:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:49.133 12:12:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_174810 00:48:49.133 12:12:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_174810 00:48:49.133 12:12:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=174836 00:48:49.133 12:12:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:48:49.133 12:12:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:48:49.133 12:12:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 174836 00:48:49.133 12:12:20 nvme_rpc_timeouts -- common/autotest_common.sh@830 -- # '[' -z 174836 ']' 00:48:49.133 12:12:20 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:49.133 12:12:20 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local max_retries=100 00:48:49.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:49.133 12:12:20 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:49.133 12:12:20 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # xtrace_disable 00:48:49.133 12:12:20 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:48:49.133 [2024-06-10 12:12:20.756108] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
00:48:49.133 [2024-06-10 12:12:20.756300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174836 ] 00:48:49.133 [2024-06-10 12:12:20.939780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:49.133 [2024-06-10 12:12:21.169447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:48:49.133 [2024-06-10 12:12:21.169464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:48:50.069 12:12:22 nvme_rpc_timeouts -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:48:50.069 Checking default timeout settings: 00:48:50.069 12:12:22 nvme_rpc_timeouts -- common/autotest_common.sh@863 -- # return 0 00:48:50.069 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:48:50.069 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:48:50.331 Making settings changes with rpc: 00:48:50.331 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:48:50.331 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:48:50.897 Check default vs. modified settings: 00:48:50.897 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:48:50.897 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_174810 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_174810 00:48:51.156 12:12:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:48:51.156 Setting action_on_timeout is changed as expected. 00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_174810 00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:48:51.156 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_174810 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:48:51.157 Setting timeout_us is changed as expected. 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_174810 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_174810 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:48:51.157 Setting timeout_admin_us is changed as expected. 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
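All three checks above follow one pattern: save the config before and after bdev_nvme_set_options, pull the setting out of each dump with grep/awk/sed, and require the two values to differ. Rolled into a single loop it looks roughly like this (same tools and file names as this run; a condensed sketch rather than the nvme_rpc_timeouts.sh source):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_174810      # defaults, captured first
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_174810     # same config after the change
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_174810 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_174810 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done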
00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_174810 /tmp/settings_modified_174810 00:48:51.157 12:12:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 174836 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@949 -- # '[' -z 174836 ']' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # kill -0 174836 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # uname 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 174836 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:48:51.157 killing process with pid 174836 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # echo 'killing process with pid 174836' 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # kill 174836 00:48:51.157 12:12:23 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # wait 174836 00:48:54.440 RPC TIMEOUT SETTING TEST PASSED. 00:48:54.440 12:12:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:48:54.440 ************************************ 00:48:54.440 END TEST nvme_rpc_timeouts 00:48:54.440 ************************************ 00:48:54.440 00:48:54.440 real 0m5.280s 00:48:54.440 user 0m10.020s 00:48:54.440 sys 0m0.719s 00:48:54.440 12:12:25 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:54.440 12:12:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:48:54.440 12:12:25 -- spdk/autotest.sh@247 -- # uname -s 00:48:54.440 12:12:25 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:48:54.440 12:12:25 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:48:54.440 12:12:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:48:54.440 12:12:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:54.440 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:48:54.440 ************************************ 00:48:54.440 START TEST sw_hotplug 00:48:54.440 ************************************ 00:48:54.440 12:12:25 sw_hotplug -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:48:54.440 * Looking for test storage... 
00:48:54.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:48:54.440 12:12:26 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:48:54.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:54.440 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:48:55.375 12:12:27 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:48:55.375 12:12:27 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:48:55.375 12:12:27 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:48:55.375 12:12:27 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@230 -- # local class 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@15 -- # local i 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:48:55.375 12:12:27 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:48:55.375 12:12:27 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:48:55.375 12:12:27 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:48:55.375 12:12:27 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:48:55.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:55.943 Waiting for block devices as requested 00:48:55.943 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:56.202 12:12:28 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:48:56.202 12:12:28 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:48:56.520 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:48:56.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:56.778 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:48:57.714 12:12:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=175418 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:48:57.714 12:12:29 sw_hotplug -- common/autotest_common.sh@706 -- # local cmd_es=0 00:48:57.714 12:12:29 sw_hotplug -- common/autotest_common.sh@708 -- # [[ -t 0 ]] 00:48:57.714 12:12:29 sw_hotplug -- common/autotest_common.sh@708 -- # exec 00:48:57.714 12:12:29 sw_hotplug -- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R 00:48:57.714 12:12:29 sw_hotplug -- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 false 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:48:57.714 12:12:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:48:57.972 Initializing NVMe Controllers 00:48:57.972 Attaching to 0000:00:10.0 00:48:57.972 Attached to 0000:00:10.0 00:48:57.972 Initialization complete. Starting I/O... 
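Before the hotplug example starts issuing I/O below, the nvmes array it operates on was populated by the PCI scan traced a few lines earlier: nvme_in_userspace keeps devices with class 01, subclass 08, prog-if 02. The heart of that scan is one pipeline, reproduced here as a standalone command (requires lspci; prints one BDF per NVMe controller):

# NVMe controllers advertise PCI class code 0108 with programming interface 02.
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# On this VM it prints a single device, 0000:00:10.0, so nvme_count=1.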
00:48:57.972 QEMU NVMe Ctrl (12340 ): 2 I/Os completed (+2) 00:48:57.972 00:48:58.905 QEMU NVMe Ctrl (12340 ): 1576 I/Os completed (+1574) 00:48:58.905 00:49:00.314 QEMU NVMe Ctrl (12340 ): 3336 I/Os completed (+1760) 00:49:00.314 00:49:00.881 QEMU NVMe Ctrl (12340 ): 5491 I/Os completed (+2155) 00:49:00.881 00:49:02.259 QEMU NVMe Ctrl (12340 ): 8081 I/Os completed (+2590) 00:49:02.259 00:49:03.190 QEMU NVMe Ctrl (12340 ): 10585 I/Os completed (+2504) 00:49:03.190 00:49:03.795 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:03.795 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:49:03.795 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:49:03.795 [2024-06-10 12:12:35.678782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:49:03.795 Controller removed: QEMU NVMe Ctrl (12340 ) 00:49:03.795 [2024-06-10 12:12:35.680609] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:03.795 [2024-06-10 12:12:35.680853] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:03.795 [2024-06-10 12:12:35.680913] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:03.795 [2024-06-10 12:12:35.681051] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:03.795 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:49:03.795 [2024-06-10 12:12:35.692001] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:03.795 [2024-06-10 12:12:35.692287] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:03.795 [2024-06-10 12:12:35.692425] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:03.795 [2024-06-10 12:12:35.692563] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:03.795 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:49:03.795 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:49:03.795 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:49:03.795 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:49:03.795 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:49:04.053 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:49:04.053 00:49:04.053 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:49:04.053 12:12:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:49:04.053 Attaching to 0000:00:10.0 00:49:04.053 Attached to 0000:00:10.0 00:49:04.987 QEMU NVMe Ctrl (12340 ): 2628 I/Os completed (+2628) 00:49:04.987 00:49:05.922 QEMU NVMe Ctrl (12340 ): 4936 I/Os completed (+2308) 00:49:05.922 00:49:07.297 QEMU NVMe Ctrl (12340 ): 7464 I/Os completed (+2528) 00:49:07.297 00:49:07.931 QEMU NVMe Ctrl (12340 ): 9760 I/Os completed (+2296) 00:49:07.931 00:49:08.865 QEMU NVMe Ctrl (12340 ): 12109 I/Os completed (+2349) 00:49:08.865 00:49:10.238 QEMU NVMe Ctrl (12340 ): 14642 I/Os completed (+2533) 00:49:10.238 00:49:10.238 12:12:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:49:10.238 12:12:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:10.238 12:12:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:49:10.238 12:12:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:49:10.238 [2024-06-10 
12:12:41.934633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:49:10.238 Controller removed: QEMU NVMe Ctrl (12340 ) 00:49:10.238 [2024-06-10 12:12:41.936647] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:10.238 [2024-06-10 12:12:41.936849] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:10.238 [2024-06-10 12:12:41.936917] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:10.238 [2024-06-10 12:12:41.937041] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:10.238 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:49:10.238 [2024-06-10 12:12:41.943912] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:10.238 [2024-06-10 12:12:41.944075] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:10.238 [2024-06-10 12:12:41.944180] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:10.238 [2024-06-10 12:12:41.944230] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:10.238 12:12:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:49:10.238 12:12:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:49:10.238 12:12:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:49:10.238 12:12:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:49:10.238 12:12:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:49:10.238 12:12:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:49:10.238 12:12:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:49:10.238 12:12:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:49:10.238 Attaching to 0000:00:10.0 00:49:10.238 Attached to 0000:00:10.0 00:49:11.173 QEMU NVMe Ctrl (12340 ): 1388 I/Os completed (+1388) 00:49:11.173 00:49:12.108 QEMU NVMe Ctrl (12340 ): 3844 I/Os completed (+2456) 00:49:12.108 00:49:13.045 QEMU NVMe Ctrl (12340 ): 6584 I/Os completed (+2740) 00:49:13.045 00:49:13.980 QEMU NVMe Ctrl (12340 ): 9005 I/Os completed (+2421) 00:49:13.980 00:49:14.916 QEMU NVMe Ctrl (12340 ): 11241 I/Os completed (+2236) 00:49:14.916 00:49:16.294 QEMU NVMe Ctrl (12340 ): 13712 I/Os completed (+2471) 00:49:16.294 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:49:16.294 [2024-06-10 12:12:48.207074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:49:16.294 Controller removed: QEMU NVMe Ctrl (12340 ) 00:49:16.294 [2024-06-10 12:12:48.208719] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:16.294 [2024-06-10 12:12:48.208881] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:16.294 [2024-06-10 12:12:48.208944] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:16.294 [2024-06-10 12:12:48.213111] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:16.294 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:49:16.294 [2024-06-10 12:12:48.215737] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:16.294 [2024-06-10 12:12:48.215879] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:16.294 [2024-06-10 12:12:48.215929] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:16.294 [2024-06-10 12:12:48.216048] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:49:16.294 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:49:16.552 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:49:16.552 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:49:16.552 12:12:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:49:16.552 Attaching to 0000:00:10.0 00:49:16.552 Attached to 0000:00:10.0 00:49:16.552 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:49:16.552 [2024-06-10 12:12:48.458724] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:49:23.161 12:12:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:49:23.161 12:12:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:23.161 12:12:54 sw_hotplug -- common/autotest_common.sh@716 -- # time=24.78 00:49:23.161 12:12:54 sw_hotplug -- common/autotest_common.sh@717 -- # echo 24.78 00:49:23.161 12:12:54 sw_hotplug -- common/autotest_common.sh@719 -- # return 0 00:49:23.161 12:12:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.78 00:49:23.161 12:12:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.78 1 00:49:23.161 remove_attach_helper took 24.78s to complete (handling 1 nvme drive(s)) 12:12:54 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 175418 00:49:28.428 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (175418) - No such process 00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 175418 00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:49:28.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
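In both halves of the test the "hotplug" is software-driven: remove_attach_helper surprise-removes the device while I/O is in flight and later brings it back, and the trap installed for the target half explicitly rescans via /sys/bus/pci/rescan. A generic sketch of that mechanism; the sysfs paths below are standard Linux PCI interfaces stated as an assumption, not a quote from sw_hotplug.sh (the script's own echo 1 / echo uio_pci_generic / echo <bdf> writes have their redirect targets hidden by xtrace):

bdf=0000:00:10.0
# Surprise-remove the device from the PCI bus while the application is doing I/O.
echo 1 > "/sys/bus/pci/devices/$bdf/remove"
sleep 6                        # hotplug_wait: let the app observe the failed state
# Bring the device back so the next of the three hotplug events can run.
echo 1 > /sys/bus/pci/rescan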
00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=175749 00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:49:28.428 12:13:00 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 175749 00:49:28.428 12:13:00 sw_hotplug -- common/autotest_common.sh@830 -- # '[' -z 175749 ']' 00:49:28.428 12:13:00 sw_hotplug -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:28.428 12:13:00 sw_hotplug -- common/autotest_common.sh@835 -- # local max_retries=100 00:49:28.428 12:13:00 sw_hotplug -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:28.428 12:13:00 sw_hotplug -- common/autotest_common.sh@839 -- # xtrace_disable 00:49:28.428 12:13:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:28.686 [2024-06-10 12:13:00.562218] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:49:28.686 [2024-06-10 12:13:00.562857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175749 ] 00:49:28.945 [2024-06-10 12:13:00.748588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:29.203 [2024-06-10 12:13:01.007868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@863 -- # return 0 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@706 -- # local cmd_es=0 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@708 -- # [[ -t 0 ]] 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@708 -- # exec 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R 00:49:30.137 12:13:01 sw_hotplug -- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 true 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:49:30.137 12:13:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 
1 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:36.694 12:13:07 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:36.694 12:13:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:36.694 12:13:07 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:36.694 [2024-06-10 12:13:07.947480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:49:36.694 [2024-06-10 12:13:07.949493] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:36.694 [2024-06-10 12:13:07.949575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:49:36.694 [2024-06-10 12:13:07.949606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:36.694 [2024-06-10 12:13:07.949647] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:36.694 [2024-06-10 12:13:07.949669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:49:36.694 [2024-06-10 12:13:07.949694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:36.694 [2024-06-10 12:13:07.949735] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:36.694 [2024-06-10 12:13:07.949760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:49:36.694 [2024-06-10 12:13:07.949781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:36.694 [2024-06-10 12:13:07.949807] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:36.694 [2024-06-10 12:13:07.949827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:49:36.694 [2024-06-10 12:13:07.949853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:36.694 12:13:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:36.694 12:13:08 sw_hotplug -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:49:36.694 12:13:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:36.694 12:13:08 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:49:36.694 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:49:36.695 12:13:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:43.259 12:13:14 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:43.259 12:13:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:43.259 12:13:14 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:43.259 12:13:14 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:43.259 12:13:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:43.259 12:13:14 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:43.259 [2024-06-10 12:13:14.847639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
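The failed-state message above marks the next software removal. On the target side (use_bdev=true) the helper does not look at sysfs to decide whether the controller is gone; it asks the running spdk_tgt which NVMe-backed bdevs still exist. The bdev_bdfs helper traced here boils down to one RPC call piped through jq, roughly (rpc_cmd replaced by a direct rpc.py invocation for the sketch):

bdev_bdfs() {
    # List every bdev known to spdk_tgt and keep the PCI address of the NVMe-backed ones.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}
bdfs=($(bdev_bdfs))   # becomes empty once the removed controller is torn down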
00:49:43.259 [2024-06-10 12:13:14.849455] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:43.259 [2024-06-10 12:13:14.849614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:49:43.259 [2024-06-10 12:13:14.849739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:43.259 [2024-06-10 12:13:14.849802] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:43.259 [2024-06-10 12:13:14.849910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:49:43.259 [2024-06-10 12:13:14.850006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:43.259 [2024-06-10 12:13:14.850051] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:43.259 [2024-06-10 12:13:14.850221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:49:43.259 [2024-06-10 12:13:14.850264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:43.259 [2024-06-10 12:13:14.850326] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:43.259 [2024-06-10 12:13:14.850461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:49:43.259 [2024-06-10 12:13:14.850521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:43.259 12:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:43.518 12:13:15 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:43.518 12:13:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:43.518 12:13:15 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:49:43.518 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:49:43.776 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:49:43.776 12:13:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:49:50.336 12:13:21 sw_hotplug -- 
nvme/sw_hotplug.sh@68 -- # true 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:50.336 12:13:21 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:50.336 12:13:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:50.336 12:13:21 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:50.336 12:13:21 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:50.336 12:13:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:50.336 12:13:21 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:50.336 [2024-06-10 12:13:21.747804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
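Each removal is then confirmed by polling that list until the BDF disappears; the repeated "Still waiting for 0000:00:10.0 to be gone" lines come from exactly this loop. A compact rendering of the wait, assuming the bdev_bdfs sketch above (logic as traced, structure simplified):

dev=0000:00:10.0
while true; do
    bdfs=($(bdev_bdfs))
    (( ${#bdfs[@]} > 0 )) || break                   # empty list: removal observed
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done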
00:49:50.336 [2024-06-10 12:13:21.749876] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:50.336 [2024-06-10 12:13:21.749940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:49:50.336 [2024-06-10 12:13:21.749974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.336 [2024-06-10 12:13:21.750007] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:50.336 [2024-06-10 12:13:21.750052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:49:50.336 [2024-06-10 12:13:21.750081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.336 [2024-06-10 12:13:21.750103] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:50.336 [2024-06-10 12:13:21.750130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:49:50.336 [2024-06-10 12:13:21.750152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.336 [2024-06-10 12:13:21.750181] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:50.336 [2024-06-10 12:13:21.750213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:49:50.336 [2024-06-10 12:13:21.750247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:50.336 12:13:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:50.336 12:13:22 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:50.336 12:13:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:50.336 12:13:22 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:49:50.336 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:49:50.595 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:49:50.595 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:49:50.595 12:13:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:49:57.155 12:13:28 sw_hotplug -- 
nvme/sw_hotplug.sh@68 -- # true 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@716 -- # time=26.75 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@717 -- # echo 26.75 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@719 -- # return 0 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=26.75 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 26.75 1 00:49:57.155 remove_attach_helper took 26.75s to complete (handling 1 nvme drive(s)) 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@706 -- # local cmd_es=0 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@708 -- # [[ -t 0 ]] 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@708 -- # exec 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R 00:49:57.155 12:13:28 sw_hotplug -- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 true 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:49:57.155 12:13:28 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:50:03.775 12:13:34 sw_hotplug 
-- nvme/sw_hotplug.sh@43 -- # true 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:03.775 12:13:34 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:03.775 12:13:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:03.775 12:13:34 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:03.775 [2024-06-10 12:13:34.727764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:50:03.775 [2024-06-10 12:13:34.729887] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:03.775 [2024-06-10 12:13:34.729948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:50:03.775 [2024-06-10 12:13:34.729982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:03.775 [2024-06-10 12:13:34.730034] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:03.775 [2024-06-10 12:13:34.730057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:50:03.775 [2024-06-10 12:13:34.730089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:03.775 [2024-06-10 12:13:34.730111] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:03.775 [2024-06-10 12:13:34.730141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:50:03.775 [2024-06-10 12:13:34.730162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:03.775 [2024-06-10 12:13:34.730201] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:03.775 [2024-06-10 12:13:34.730222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:50:03.775 [2024-06-10 12:13:34.730249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:03.775 12:13:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:03.775 12:13:35 sw_hotplug -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:50:03.775 12:13:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:03.775 12:13:35 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:50:03.775 12:13:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:10.336 12:13:41 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:10.336 12:13:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:10.336 12:13:41 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:10.336 [2024-06-10 12:13:41.628210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:10.336 [2024-06-10 12:13:41.630216] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:10.336 [2024-06-10 12:13:41.630431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:50:10.336 [2024-06-10 12:13:41.630582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.336 [2024-06-10 12:13:41.630739] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:10.336 [2024-06-10 12:13:41.630874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:50:10.336 [2024-06-10 12:13:41.630982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.336 [2024-06-10 12:13:41.631112] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:10.336 [2024-06-10 12:13:41.631225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:50:10.336 [2024-06-10 12:13:41.631347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.336 [2024-06-10 12:13:41.631462] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:10.336 12:13:41 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:10.336 [2024-06-10 12:13:41.631588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:50:10.336 12:13:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:10.336 [2024-06-10 12:13:41.631697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:10.336 12:13:41 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:50:10.336 12:13:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:16.897 12:13:47 sw_hotplug -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:50:16.897 12:13:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:16.897 12:13:47 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:16.897 12:13:47 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:16.897 12:13:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:16.897 12:13:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:16.897 12:13:48 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:16.897 [2024-06-10 12:13:48.028384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:50:16.897 [2024-06-10 12:13:48.030888] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:16.897 [2024-06-10 12:13:48.030970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:50:16.897 [2024-06-10 12:13:48.031017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:16.897 [2024-06-10 12:13:48.031054] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:16.897 [2024-06-10 12:13:48.031088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:50:16.897 [2024-06-10 12:13:48.031118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:16.897 [2024-06-10 12:13:48.031151] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:16.897 [2024-06-10 12:13:48.031177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:50:16.897 [2024-06-10 12:13:48.031209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:16.897 [2024-06-10 12:13:48.031240] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:16.897 [2024-06-10 12:13:48.031272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:50:16.897 [2024-06-10 12:13:48.031306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # 
printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:16.897 12:13:48 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:16.897 12:13:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:16.897 12:13:48 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:50:16.897 12:13:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:50:23.503 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:50:23.503 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:50:23.503 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:50:23.503 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:23.503 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:23.503 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:23.503 12:13:54 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:23.503 12:13:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:23.503 12:13:54 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:23.503 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:50:23.503 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:50:23.503 12:13:54 sw_hotplug -- common/autotest_common.sh@716 -- # time=26.18 00:50:23.503 12:13:54 sw_hotplug -- common/autotest_common.sh@717 -- # echo 26.18 00:50:23.503 12:13:54 sw_hotplug -- common/autotest_common.sh@719 -- # return 0 00:50:23.504 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=26.18 00:50:23.504 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 26.18 1 00:50:23.504 remove_attach_helper took 26.18s to complete (handling 1 nvme drive(s)) 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:50:23.504 12:13:54 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 175749 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@949 -- # '[' -z 175749 ']' 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@953 -- # kill -0 175749 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@954 -- # uname 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 175749 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:50:23.504 killing process with pid 175749 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@967 -- # echo 'killing process with pid 175749' 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@968 -- # kill 175749 00:50:23.504 12:13:54 sw_hotplug -- common/autotest_common.sh@973 -- # wait 175749 00:50:26.088 12:13:57 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:50:26.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:50:26.088 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:50:27.021 ************************************ 00:50:27.021 END TEST sw_hotplug 00:50:27.021 ************************************ 00:50:27.021 00:50:27.021 real 1m32.899s 00:50:27.021 user 1m6.928s 00:50:27.021 sys 0m16.519s 00:50:27.021 12:13:58 sw_hotplug -- common/autotest_common.sh@1125 -- # xtrace_disable 00:50:27.021 12:13:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:27.021 12:13:58 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:50:27.021 12:13:58 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@264 -- # timing_exit lib 00:50:27.021 12:13:58 -- common/autotest_common.sh@729 -- # xtrace_disable 00:50:27.021 12:13:58 -- common/autotest_common.sh@10 -- # set +x 00:50:27.021 12:13:58 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:50:27.021 12:13:58 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:50:27.021 12:13:58 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:50:27.021 12:13:58 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:50:27.021 12:13:58 -- spdk/autotest.sh@379 -- # [[ 1 -eq 1 ]] 00:50:27.021 12:13:58 -- spdk/autotest.sh@380 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:50:27.021 12:13:58 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:50:27.021 12:13:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:50:27.021 12:13:58 -- common/autotest_common.sh@10 -- # set +x 00:50:27.021 ************************************ 00:50:27.021 START TEST blockdev_raid5f 00:50:27.021 ************************************ 00:50:27.021 12:13:58 blockdev_raid5f -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:50:27.021 * Looking for test storage... 
00:50:27.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@674 -- # uname -s 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@683 -- # crypto_device= 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@684 -- # dek= 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@685 -- # env_ctx= 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=176633 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 176633 00:50:27.021 12:13:59 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:50:27.021 12:13:59 blockdev_raid5f -- common/autotest_common.sh@830 -- # '[' -z 176633 ']' 00:50:27.021 12:13:59 blockdev_raid5f -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:27.021 12:13:59 blockdev_raid5f -- common/autotest_common.sh@835 -- # local max_retries=100 00:50:27.021 12:13:59 blockdev_raid5f -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:27.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:27.021 12:13:59 blockdev_raid5f -- common/autotest_common.sh@839 -- # xtrace_disable 00:50:27.021 12:13:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:27.279 [2024-06-10 12:13:59.136348] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
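At this point blockdev.sh has launched spdk_tgt in the background (pid 176633) and is blocking in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers; the "Starting SPDK v24.09-pre ..." banner that follows is the target's own startup output. A rough sketch of that bring-up pattern is below; the retry limit and the particular RPC used as a liveness probe are assumptions for illustration, not the exact autotest_common.sh implementation.

    # Illustrative bring-up sequence following the trace above; details are assumptions.
    start_spdk_tgt() {
        "$rootdir/build/bin/spdk_tgt" &                      # launch the target in the background
        spdk_tgt_pid=$!
        trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$spdk_tgt_pid"                        # block until the RPC socket answers
    }

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1           # give up if the target died
            # Any successful RPC means the socket is live; rpc_get_methods is a cheap probe.
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }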
00:50:27.279 [2024-06-10 12:13:59.136764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176633 ] 00:50:27.279 [2024-06-10 12:13:59.313991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:27.845 [2024-06-10 12:13:59.608089] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:50:28.411 12:14:00 blockdev_raid5f -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:50:28.411 12:14:00 blockdev_raid5f -- common/autotest_common.sh@863 -- # return 0 00:50:28.411 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:50:28.411 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:50:28.411 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@280 -- # rpc_cmd 00:50:28.411 12:14:00 blockdev_raid5f -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:28.411 12:14:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:28.669 Malloc0 00:50:28.669 Malloc1 00:50:28.669 Malloc2 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@740 -- # cat 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@749 -- # jq -r .name 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@749 -- 
# printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9c71a987-8ab3-4ea3-b7b9-a7bfab0f104f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9c71a987-8ab3-4ea3-b7b9-a7bfab0f104f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9c71a987-8ab3-4ea3-b7b9-a7bfab0f104f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4af5f490-b7ac-494a-8922-8ba802fd5874",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bdbd0e51-65aa-4897-933e-6f7e1d2e8133",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "831f4412-5776-4dd3-86d7-5c0e058f9287",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:50:28.669 12:14:00 blockdev_raid5f -- bdev/blockdev.sh@754 -- # killprocess 176633 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@949 -- # '[' -z 176633 ']' 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@953 -- # kill -0 176633 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@954 -- # uname 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:50:28.669 12:14:00 blockdev_raid5f -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 176633 00:50:28.927 12:14:00 blockdev_raid5f -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:50:28.927 12:14:00 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:50:28.927 12:14:00 blockdev_raid5f -- common/autotest_common.sh@967 -- # echo 'killing process with pid 176633' 00:50:28.927 killing process with pid 176633 00:50:28.927 12:14:00 blockdev_raid5f -- common/autotest_common.sh@968 -- # kill 176633 00:50:28.927 12:14:00 blockdev_raid5f -- common/autotest_common.sh@973 -- # wait 176633 00:50:32.209 12:14:03 blockdev_raid5f -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:50:32.209 12:14:03 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:50:32.209 12:14:03 blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:50:32.209 12:14:03 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:50:32.209 12:14:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:32.209 ************************************ 00:50:32.209 START TEST bdev_hello_world 00:50:32.209 
************************************ 00:50:32.209 12:14:03 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:50:32.209 [2024-06-10 12:14:03.668889] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:50:32.209 [2024-06-10 12:14:03.669153] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176703 ] 00:50:32.209 [2024-06-10 12:14:03.851472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:32.209 [2024-06-10 12:14:04.097958] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:50:32.774 [2024-06-10 12:14:04.779719] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:50:32.774 [2024-06-10 12:14:04.779827] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:50:32.774 [2024-06-10 12:14:04.779869] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:50:32.774 [2024-06-10 12:14:04.780479] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:50:32.774 [2024-06-10 12:14:04.780679] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:50:32.774 [2024-06-10 12:14:04.780722] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:50:32.774 [2024-06-10 12:14:04.780813] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:50:32.774 00:50:32.774 [2024-06-10 12:14:04.780904] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:50:34.674 ************************************ 00:50:34.674 END TEST bdev_hello_world 00:50:34.674 ************************************ 00:50:34.674 00:50:34.674 real 0m3.066s 00:50:34.674 user 0m2.660s 00:50:34.674 sys 0m0.277s 00:50:34.674 12:14:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:50:34.674 12:14:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:50:34.674 12:14:06 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:50:34.674 12:14:06 blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:50:34.675 12:14:06 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:50:34.675 12:14:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:34.675 ************************************ 00:50:34.675 START TEST bdev_bounds 00:50:34.675 ************************************ 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=176763 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 176763' 00:50:34.675 Process bdevio pid: 176763 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 176763 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@830 -- # '[' -z 176763 ']' 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:34.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:50:34.675 12:14:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:50:34.932 [2024-06-10 12:14:06.809980] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:50:34.932 [2024-06-10 12:14:06.810446] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176763 ] 00:50:35.190 [2024-06-10 12:14:06.999444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:50:35.190 [2024-06-10 12:14:07.195435] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:50:35.190 [2024-06-10 12:14:07.195600] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:50:35.190 [2024-06-10 12:14:07.195602] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:50:35.757 12:14:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:50:35.757 12:14:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:50:35.757 12:14:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:50:36.016 I/O targets: 00:50:36.016 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:50:36.016 00:50:36.016 00:50:36.016 CUnit - A unit testing framework for C - Version 2.1-3 00:50:36.016 http://cunit.sourceforge.net/ 00:50:36.016 00:50:36.016 00:50:36.016 Suite: bdevio tests on: raid5f 00:50:36.016 Test: blockdev write read block ...passed 00:50:36.016 Test: blockdev write zeroes read block ...passed 00:50:36.016 Test: blockdev write zeroes read no split ...passed 00:50:36.016 Test: blockdev write zeroes read split ...passed 00:50:36.016 Test: blockdev write zeroes read split partial ...passed 00:50:36.016 Test: blockdev reset ...passed 00:50:36.016 Test: blockdev write read 8 blocks ...passed 00:50:36.016 Test: blockdev write read size > 128k ...passed 00:50:36.276 Test: blockdev write read invalid size ...passed 00:50:36.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:50:36.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:50:36.276 Test: blockdev write read max offset ...passed 00:50:36.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:50:36.276 Test: blockdev writev readv 8 blocks ...passed 00:50:36.276 Test: blockdev writev readv 30 x 1block ...passed 00:50:36.276 Test: blockdev writev readv block ...passed 00:50:36.276 Test: blockdev writev readv size > 128k ...passed 00:50:36.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:50:36.276 Test: blockdev comparev and writev ...passed 00:50:36.276 Test: blockdev nvme passthru rw ...passed 00:50:36.276 Test: blockdev nvme 
passthru vendor specific ...passed 00:50:36.276 Test: blockdev nvme admin passthru ...passed 00:50:36.277 Test: blockdev copy ...passed 00:50:36.277 00:50:36.277 Run Summary: Type Total Ran Passed Failed Inactive 00:50:36.277 suites 1 1 n/a 0 0 00:50:36.277 tests 23 23 23 0 0 00:50:36.277 asserts 130 130 130 0 n/a 00:50:36.277 00:50:36.277 Elapsed time = 0.608 seconds 00:50:36.277 0 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 176763 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 176763 ']' 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 176763 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 176763 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:50:36.277 killing process with pid 176763 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 176763' 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # kill 176763 00:50:36.277 12:14:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # wait 176763 00:50:38.180 12:14:09 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:50:38.180 00:50:38.180 real 0m3.122s 00:50:38.180 user 0m7.419s 00:50:38.180 sys 0m0.338s 00:50:38.180 ************************************ 00:50:38.180 END TEST bdev_bounds 00:50:38.180 ************************************ 00:50:38.180 12:14:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:50:38.180 12:14:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:50:38.180 12:14:09 blockdev_raid5f -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:50:38.180 12:14:09 blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:50:38.180 12:14:09 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:50:38.180 12:14:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:38.180 ************************************ 00:50:38.180 START TEST bdev_nbd 00:50:38.180 ************************************ 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('raid5f') 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@305 -- # local bdev_num=1 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:50:38.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('raid5f') 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=176832 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 176832 /var/tmp/spdk-nbd.sock 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 176832 ']' 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:50:38.180 12:14:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:50:38.180 [2024-06-10 12:14:09.987887] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 
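The nbd test here starts a dedicated bdev_svc app listening on /var/tmp/spdk-nbd.sock and, once it is up, exports the raid5f bdev as a kernel block device before exercising it with dd, as the nbd_common.sh traces that follow show. A sketch of that export-and-wait step, reconstructed from those traces; the wrapper name nbd_export_bdev and the helper internals are assumptions.

    # Sketch of exporting a bdev over NBD, following the nbd_common.sh trace below.
    nbd_export_bdev() {
        local bdev=$1 nbd_device
        # Ask the app to attach the bdev to a free /dev/nbdX and report the device it chose.
        nbd_device=$("$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
        waitfornbd "$(basename "$nbd_device")"
        echo "$nbd_device"
    }

    waitfornbd() {
        local nbd_name=$1 i
        # The kernel lists the device in /proc/partitions once it is usable.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1
        done
        return 1
    }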
00:50:38.180 [2024-06-10 12:14:09.988368] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:38.180 [2024-06-10 12:14:10.174216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:38.438 [2024-06-10 12:14:10.387719] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:50:39.003 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:50:39.260 1+0 records in 00:50:39.260 1+0 records out 00:50:39.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291469 s, 14.1 MB/s 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:50:39.260 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:50:39.826 { 00:50:39.826 "nbd_device": "/dev/nbd0", 00:50:39.826 "bdev_name": "raid5f" 00:50:39.826 } 00:50:39.826 ]' 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:50:39.826 { 00:50:39.826 "nbd_device": "/dev/nbd0", 00:50:39.826 "bdev_name": "raid5f" 00:50:39.826 } 00:50:39.826 ]' 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:39.826 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:40.084 12:14:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:40.084 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- 
# nbd_disks_json='[]' 00:50:40.084 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:50:40.084 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:50:40.342 /dev/nbd0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:50:40.342 12:14:12 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:50:40.342 1+0 records in 00:50:40.342 1+0 records out 00:50:40.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027263 s, 15.0 MB/s 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:50:40.342 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:50:40.600 { 00:50:40.600 "nbd_device": "/dev/nbd0", 00:50:40.600 "bdev_name": "raid5f" 00:50:40.600 } 00:50:40.600 ]' 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:50:40.600 { 00:50:40.600 "nbd_device": "/dev/nbd0", 00:50:40.600 "bdev_name": "raid5f" 00:50:40.600 } 00:50:40.600 ]' 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:50:40.600 256+0 records in 00:50:40.600 256+0 records out 00:50:40.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669843 s, 157 MB/s 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:50:40.600 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:50:40.858 256+0 records in 00:50:40.858 256+0 records out 00:50:40.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348525 s, 30.1 MB/s 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:40.858 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:41.116 12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:41.116 
12:14:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:41.373 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:50:41.374 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:50:41.631 malloc_lvol_verify 00:50:41.631 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:50:41.888 4b0ae890-eb09-445d-9c16-3a0db16c5e4d 00:50:41.888 12:14:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:50:42.146 028fdd62-f917-4662-be24-3e00258c402e 00:50:42.146 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:50:42.403 /dev/nbd0 00:50:42.403 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:50:42.403 mke2fs 1.46.5 (30-Dec-2021) 00:50:42.403 00:50:42.403 Filesystem too small for a journal 00:50:42.403 Discarding device blocks: 0/1024 done 00:50:42.403 Creating filesystem with 1024 4k blocks and 1024 inodes 00:50:42.403 00:50:42.403 Allocating group tables: 0/1 done 00:50:42.403 Writing inode tables: 0/1 done 00:50:42.403 Writing superblocks and filesystem accounting information: 0/1 done 00:50:42.403 00:50:42.403 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:50:42.403 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:42.403 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:42.403 12:14:14 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:50:42.403 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:42.403 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:50:42.403 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:42.403 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 176832 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 176832 ']' 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 176832 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 176832 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 176832' 00:50:42.663 killing process with pid 176832 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # kill 176832 00:50:42.663 12:14:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # wait 176832 00:50:44.564 ************************************ 00:50:44.564 END TEST bdev_nbd 00:50:44.564 ************************************ 00:50:44.564 12:14:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:50:44.564 00:50:44.565 real 0m6.435s 00:50:44.565 user 0m8.774s 00:50:44.565 sys 0m1.448s 00:50:44.565 12:14:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:50:44.565 12:14:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:50:44.565 12:14:16 blockdev_raid5f -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:50:44.565 12:14:16 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:50:44.565 12:14:16 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:50:44.565 12:14:16 blockdev_raid5f -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:50:44.565 12:14:16 
blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:50:44.565 12:14:16 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:50:44.565 12:14:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:44.565 ************************************ 00:50:44.565 START TEST bdev_fio 00:50:44.565 ************************************ 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1124 -- # fio_test_suite '' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:50:44.565 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=verify 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type=AIO 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z verify ']' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1298 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1312 -- # '[' verify == verify ']' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # cat 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1322 -- # '[' AIO == AIO ']' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # /usr/src/fio/fio --version 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # echo serialize_overlap=1 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@347 -- # local 
'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:50:44.565 ************************************ 00:50:44.565 START TEST bdev_fio_rw_verify 00:50:44.565 ************************************ 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local sanitizers 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # shift 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local asan_lib= 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libasan 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # break 00:50:44.565 
12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:44.565 12:14:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:44.823 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:50:44.823 fio-3.35 00:50:44.823 Starting 1 thread 00:50:57.013 00:50:57.013 job_raid5f: (groupid=0, jobs=1): err= 0: pid=177078: Mon Jun 10 12:14:27 2024 00:50:57.013 read: IOPS=9555, BW=37.3MiB/s (39.1MB/s)(373MiB/10001msec) 00:50:57.014 slat (usec): min=18, max=331, avg=25.42, stdev= 3.93 00:50:57.014 clat (usec): min=11, max=546, avg=169.11, stdev=63.42 00:50:57.014 lat (usec): min=33, max=575, avg=194.53, stdev=64.47 00:50:57.014 clat percentiles (usec): 00:50:57.014 | 50.000th=[ 169], 99.000th=[ 306], 99.900th=[ 383], 99.990th=[ 457], 00:50:57.014 | 99.999th=[ 545] 00:50:57.014 write: IOPS=9990, BW=39.0MiB/s (40.9MB/s)(386MiB/9882msec); 0 zone resets 00:50:57.014 slat (usec): min=8, max=193, avg=21.24, stdev= 4.88 00:50:57.014 clat (usec): min=64, max=2027, avg=379.33, stdev=64.73 00:50:57.014 lat (usec): min=82, max=2047, avg=400.57, stdev=66.88 00:50:57.014 clat percentiles (usec): 00:50:57.014 | 50.000th=[ 375], 99.000th=[ 562], 99.900th=[ 676], 99.990th=[ 1418], 00:50:57.014 | 99.999th=[ 2024] 00:50:57.014 bw ( KiB/s): min=33776, max=46144, per=98.45%, avg=39345.68, stdev=3537.35, samples=19 00:50:57.014 iops : min= 8444, max=11536, avg=9836.42, stdev=884.34, samples=19 00:50:57.014 lat (usec) : 20=0.01%, 50=0.01%, 100=7.81%, 250=34.90%, 500=56.02% 00:50:57.014 lat (usec) : 750=1.26%, 1000=0.01% 00:50:57.014 lat (msec) : 2=0.01%, 4=0.01% 00:50:57.014 cpu : usr=99.59%, sys=0.35%, ctx=113, majf=0, minf=6779 00:50:57.014 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:50:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:57.014 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:57.014 issued rwts: total=95560,98729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:57.014 latency : target=0, window=0, percentile=100.00%, depth=8 00:50:57.014 00:50:57.014 Run status group 0 (all jobs): 00:50:57.014 READ: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=373MiB (391MB), run=10001-10001msec 00:50:57.014 WRITE: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=386MiB (404MB), run=9882-9882msec 00:50:57.271 ----------------------------------------------------- 00:50:57.271 Suppressions used: 00:50:57.271 count bytes template 00:50:57.271 1 7 /usr/src/fio/parse.c 00:50:57.271 205 19680 /usr/src/fio/iolog.c 00:50:57.271 1 904 libcrypto.so 00:50:57.272 ----------------------------------------------------- 00:50:57.272 00:50:57.530 ************************************ 00:50:57.530 END TEST bdev_fio_rw_verify 00:50:57.530 ************************************ 00:50:57.530 00:50:57.530 real 0m12.862s 00:50:57.530 user 0m13.597s 00:50:57.530 sys 0m0.754s 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:50:57.530 12:14:29 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=trim 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type= 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z trim ']' 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1298 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1312 -- # '[' trim == verify ']' 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' trim == trim ']' 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # echo rw=trimwrite 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9c71a987-8ab3-4ea3-b7b9-a7bfab0f104f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9c71a987-8ab3-4ea3-b7b9-a7bfab0f104f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9c71a987-8ab3-4ea3-b7b9-a7bfab0f104f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4af5f490-b7ac-494a-8922-8ba802fd5874",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bdbd0e51-65aa-4897-933e-6f7e1d2e8133",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "831f4412-5776-4dd3-86d7-5c0e058f9287",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 
'select(.supported_io_types.unmap == true) | .name' 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:50:57.530 /home/vagrant/spdk_repo/spdk 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:50:57.530 00:50:57.530 real 0m13.071s 00:50:57.530 user 0m13.724s 00:50:57.530 sys 0m0.834s 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:50:57.530 12:14:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:50:57.530 ************************************ 00:50:57.530 END TEST bdev_fio 00:50:57.530 ************************************ 00:50:57.530 12:14:29 blockdev_raid5f -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:50:57.530 12:14:29 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:50:57.530 12:14:29 blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:50:57.530 12:14:29 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:50:57.530 12:14:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:50:57.530 ************************************ 00:50:57.530 START TEST bdev_verify 00:50:57.530 ************************************ 00:50:57.530 12:14:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:50:57.788 [2024-06-10 12:14:29.599430] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:50:57.788 [2024-06-10 12:14:29.599752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177246 ] 00:50:57.788 [2024-06-10 12:14:29.763103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:58.046 [2024-06-10 12:14:29.970220] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:50:58.046 [2024-06-10 12:14:29.970224] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:50:58.634 Running I/O for 5 seconds... 
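While the verify job runs, it is worth spelling out what the harness just launched: the bdevperf example application, pointed at the generated bdev.json and driven in verify mode. A rough equivalent of the traced invocation, with its flags annotated (a sketch only; the paths and flags are copied from the xtrace above, and the results printed below come from the actual run):

    # bdevperf verify run as traced above
    #   -q 128     queue depth
    #   -o 4096    I/O size in bytes
    #   -w verify  write data, then read it back and compare
    #   -t 5       run time in seconds
    #   -C -m 0x3  remaining flags exactly as the test passes them (mask 0x3 = cores 0 and 1)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3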
00:51:03.898 00:51:03.898 Latency(us) 00:51:03.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:03.898 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:51:03.898 Verification LBA range: start 0x0 length 0x2000 00:51:03.898 raid5f : 5.02 7367.41 28.78 0.00 0.00 25989.90 89.23 27712.37 00:51:03.898 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:51:03.898 Verification LBA range: start 0x2000 length 0x2000 00:51:03.898 raid5f : 5.01 7353.67 28.73 0.00 0.00 26059.49 255.51 28086.86 00:51:03.898 =================================================================================================================== 00:51:03.898 Total : 14721.08 57.50 0.00 0.00 26024.65 89.23 28086.86 00:51:05.801 ************************************ 00:51:05.801 END TEST bdev_verify 00:51:05.801 ************************************ 00:51:05.801 00:51:05.801 real 0m7.810s 00:51:05.801 user 0m14.316s 00:51:05.801 sys 0m0.252s 00:51:05.801 12:14:37 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:05.801 12:14:37 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:51:05.801 12:14:37 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:51:05.801 12:14:37 blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:51:05.801 12:14:37 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:05.801 12:14:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:05.801 ************************************ 00:51:05.801 START TEST bdev_verify_big_io 00:51:05.801 ************************************ 00:51:05.801 12:14:37 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:51:05.801 [2024-06-10 12:14:37.495419] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:51:05.801 [2024-06-10 12:14:37.495850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177355 ] 00:51:05.801 [2024-06-10 12:14:37.684302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:51:06.060 [2024-06-10 12:14:37.903879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:06.060 [2024-06-10 12:14:37.903883] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:51:06.628 Running I/O for 5 seconds... 
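A quick arithmetic check ties the columns of these latency tables together: MiB/s is simply IOPS times the I/O size. For the 4 KiB verify table above, 7367.41 IOPS x 4096 B is about 28.78 MiB/s, matching the raid5f row; the same holds for the 64 KiB big-I/O run whose results follow (399.91 IOPS x 65536 B is about 24.99 MiB/s). A one-liner to reproduce the conversion, using the numbers taken from this log:

    # MiB/s = IOPS * block_size / 2^20
    awk 'BEGIN { printf "%.2f MiB/s  %.2f MiB/s\n", 7367.41*4096/1048576, 399.91*65536/1048576 }'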
00:51:11.900 00:51:11.900 Latency(us) 00:51:11.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:11.900 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:51:11.900 Verification LBA range: start 0x0 length 0x200 00:51:11.900 raid5f : 5.39 399.91 24.99 0.00 0.00 7915518.09 280.87 421427.69 00:51:11.900 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:51:11.900 Verification LBA range: start 0x200 length 0x200 00:51:11.900 raid5f : 5.42 351.46 21.97 0.00 0.00 8990810.57 209.68 489335.47 00:51:11.900 =================================================================================================================== 00:51:11.900 Total : 751.37 46.96 0.00 0.00 8419669.01 209.68 489335.47 00:51:13.852 ************************************ 00:51:13.852 END TEST bdev_verify_big_io 00:51:13.852 ************************************ 00:51:13.852 00:51:13.852 real 0m8.338s 00:51:13.852 user 0m15.287s 00:51:13.852 sys 0m0.272s 00:51:13.852 12:14:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:13.852 12:14:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:51:13.852 12:14:45 blockdev_raid5f -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:51:13.852 12:14:45 blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:51:13.852 12:14:45 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:13.852 12:14:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:13.852 ************************************ 00:51:13.852 START TEST bdev_write_zeroes 00:51:13.852 ************************************ 00:51:13.853 12:14:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:51:13.853 [2024-06-10 12:14:45.866111] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:51:13.853 [2024-06-10 12:14:45.866565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177463 ] 00:51:14.112 [2024-06-10 12:14:46.040307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:14.371 [2024-06-10 12:14:46.255266] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:14.939 Running I/O for 1 seconds... 
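Every bdevperf run in this suite, including the write_zeroes job just started above, consumes the same generated /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json. Assuming the standard SPDK config layout (a top-level "subsystems" array holding a "bdev" subsystem whose "config" list carries method/params entries), the bdevs it defines can be inspected with:

    # List the RPC methods recorded in the generated config (layout assumed as described above)
    jq -r '.subsystems[] | select(.subsystem == "bdev") | .config[].method' \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json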
00:51:15.875 00:51:15.875 Latency(us) 00:51:15.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:15.875 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:51:15.875 raid5f : 1.00 26812.30 104.74 0.00 0.00 4759.48 1271.71 6210.32 00:51:15.875 =================================================================================================================== 00:51:15.875 Total : 26812.30 104.74 0.00 0.00 4759.48 1271.71 6210.32 00:51:17.782 00:51:17.782 real 0m3.816s 00:51:17.782 user 0m3.479s 00:51:17.782 sys 0m0.212s 00:51:17.782 ************************************ 00:51:17.782 END TEST bdev_write_zeroes 00:51:17.782 ************************************ 00:51:17.782 12:14:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:17.782 12:14:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:51:17.782 12:14:49 blockdev_raid5f -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:51:17.782 12:14:49 blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:51:17.782 12:14:49 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:17.782 12:14:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:17.782 ************************************ 00:51:17.782 START TEST bdev_json_nonenclosed 00:51:17.782 ************************************ 00:51:17.782 12:14:49 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:51:17.782 [2024-06-10 12:14:49.766473] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:51:17.782 [2024-06-10 12:14:49.766948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177533 ] 00:51:18.040 [2024-06-10 12:14:49.948251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:18.299 [2024-06-10 12:14:50.162501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:18.299 [2024-06-10 12:14:50.162802] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
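bdev_json_nonenclosed is a negative test: bdevperf is handed a config whose top-level value is not a JSON object, and the expectation is the clean shutdown seen above (the "not enclosed in {}" error followed by the spdk_app_stop warning) rather than a crash. The repo's nonenclosed.json is not shown in this log; a stand-in that should exercise the same check might look like:

    # Illustrative stand-in only (not the repo fixture): valid JSON, but not an object,
    # so the "not enclosed in {}" validation is expected to fire.
    printf '[]\n' > /tmp/nonenclosed.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1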
00:51:18.299 [2024-06-10 12:14:50.162944] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:51:18.299 [2024-06-10 12:14:50.163056] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:51:18.868 ************************************ 00:51:18.868 END TEST bdev_json_nonenclosed 00:51:18.868 ************************************ 00:51:18.868 00:51:18.868 real 0m0.972s 00:51:18.868 user 0m0.721s 00:51:18.868 sys 0m0.150s 00:51:18.868 12:14:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:18.868 12:14:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:51:18.868 12:14:50 blockdev_raid5f -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:51:18.868 12:14:50 blockdev_raid5f -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:51:18.868 12:14:50 blockdev_raid5f -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:18.868 12:14:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:18.868 ************************************ 00:51:18.868 START TEST bdev_json_nonarray 00:51:18.868 ************************************ 00:51:18.868 12:14:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:51:18.868 [2024-06-10 12:14:50.784265] Starting SPDK v24.09-pre git sha1 d88da79a3 / DPDK 24.03.0 initialization... 00:51:18.868 [2024-06-10 12:14:50.784579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177571 ] 00:51:19.127 [2024-06-10 12:14:50.944082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:19.127 [2024-06-10 12:14:51.157012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:19.127 [2024-06-10 12:14:51.157286] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
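bdev_json_nonarray is the companion negative test: this time the config is a JSON object, but its "subsystems" member is not an array, which trips the second validation error shown above. For contrast, the outer shape both checks are guarding looks like the skeleton below (the real bdev.json is generated by the harness; this is only an illustration of the expected structure):

    # Expected outer structure of an SPDK JSON config (illustrative skeleton)
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }\n' \
        > /tmp/valid_shape.json
    jq . /tmp/valid_shape.json   # pretty-print to confirm it parses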
00:51:19.127 [2024-06-10 12:14:51.157434] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:51:19.127 [2024-06-10 12:14:51.157537] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:51:19.695 00:51:19.695 real 0m0.908s 00:51:19.695 user 0m0.670s 00:51:19.695 sys 0m0.137s 00:51:19.695 ************************************ 00:51:19.695 END TEST bdev_json_nonarray 00:51:19.695 ************************************ 00:51:19.695 12:14:51 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:19.695 12:14:51 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@811 -- # cleanup 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:51:19.695 12:14:51 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:51:19.695 ************************************ 00:51:19.695 END TEST blockdev_raid5f 00:51:19.695 ************************************ 00:51:19.695 00:51:19.695 real 0m52.751s 00:51:19.695 user 1m11.935s 00:51:19.695 sys 0m4.814s 00:51:19.695 12:14:51 blockdev_raid5f -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:19.695 12:14:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:19.695 12:14:51 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:51:19.695 12:14:51 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:51:19.695 12:14:51 -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:19.695 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:51:19.695 12:14:51 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:51:19.695 12:14:51 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:51:19.695 12:14:51 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:51:19.695 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:51:21.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:51:21.595 Waiting for block devices as requested 00:51:21.595 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:51:22.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:51:22.420 Cleaning 00:51:22.420 Removing: /var/run/dpdk/spdk0/config 00:51:22.420 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:51:22.420 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:51:22.420 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:51:22.420 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:51:22.420 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:51:22.420 Removing: /var/run/dpdk/spdk0/hugepage_info 00:51:22.420 Removing: /dev/shm/spdk_tgt_trace.pid112105 
00:51:22.420 Removing: /var/run/dpdk/spdk0 00:51:22.420 Removing: /var/run/dpdk/spdk_pid111838 00:51:22.420 Removing: /var/run/dpdk/spdk_pid112105 00:51:22.420 Removing: /var/run/dpdk/spdk_pid112371 00:51:22.420 Removing: /var/run/dpdk/spdk_pid112496 00:51:22.420 Removing: /var/run/dpdk/spdk_pid112562 00:51:22.420 Removing: /var/run/dpdk/spdk_pid112708 00:51:22.420 Removing: /var/run/dpdk/spdk_pid112731 00:51:22.420 Removing: /var/run/dpdk/spdk_pid112899 00:51:22.420 Removing: /var/run/dpdk/spdk_pid113178 00:51:22.420 Removing: /var/run/dpdk/spdk_pid113367 00:51:22.420 Removing: /var/run/dpdk/spdk_pid113493 00:51:22.420 Removing: /var/run/dpdk/spdk_pid113604 00:51:22.420 Removing: /var/run/dpdk/spdk_pid113739 00:51:22.420 Removing: /var/run/dpdk/spdk_pid113853 00:51:22.420 Removing: /var/run/dpdk/spdk_pid113906 00:51:22.420 Removing: /var/run/dpdk/spdk_pid113959 00:51:22.420 Removing: /var/run/dpdk/spdk_pid114037 00:51:22.420 Removing: /var/run/dpdk/spdk_pid114167 00:51:22.420 Removing: /var/run/dpdk/spdk_pid114709 00:51:22.420 Removing: /var/run/dpdk/spdk_pid114801 00:51:22.420 Removing: /var/run/dpdk/spdk_pid114885 00:51:22.420 Removing: /var/run/dpdk/spdk_pid114909 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115078 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115099 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115281 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115301 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115377 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115407 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115483 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115506 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115716 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115766 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115814 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115894 00:51:22.420 Removing: /var/run/dpdk/spdk_pid115990 00:51:22.420 Removing: /var/run/dpdk/spdk_pid116042 00:51:22.420 Removing: /var/run/dpdk/spdk_pid116131 00:51:22.420 Removing: /var/run/dpdk/spdk_pid116196 00:51:22.420 Removing: /var/run/dpdk/spdk_pid116259 00:51:22.420 Removing: /var/run/dpdk/spdk_pid116323 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116380 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116445 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116508 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116571 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116631 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116694 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116755 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116816 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116879 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116937 00:51:22.679 Removing: /var/run/dpdk/spdk_pid116993 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117066 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117129 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117190 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117258 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117321 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117385 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117476 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117619 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117809 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117913 00:51:22.679 Removing: /var/run/dpdk/spdk_pid117987 00:51:22.679 Removing: /var/run/dpdk/spdk_pid119269 00:51:22.679 Removing: /var/run/dpdk/spdk_pid119503 00:51:22.679 Removing: /var/run/dpdk/spdk_pid119722 00:51:22.679 Removing: /var/run/dpdk/spdk_pid119861 00:51:22.679 Removing: /var/run/dpdk/spdk_pid120029 00:51:22.679 Removing: 
/var/run/dpdk/spdk_pid120122 00:51:22.679 Removing: /var/run/dpdk/spdk_pid120162 00:51:22.679 Removing: /var/run/dpdk/spdk_pid120200 00:51:22.679 Removing: /var/run/dpdk/spdk_pid120678 00:51:22.679 Removing: /var/run/dpdk/spdk_pid120784 00:51:22.679 Removing: /var/run/dpdk/spdk_pid120908 00:51:22.679 Removing: /var/run/dpdk/spdk_pid120985 00:51:22.679 Removing: /var/run/dpdk/spdk_pid122372 00:51:22.679 Removing: /var/run/dpdk/spdk_pid122752 00:51:22.679 Removing: /var/run/dpdk/spdk_pid122957 00:51:22.679 Removing: /var/run/dpdk/spdk_pid123950 00:51:22.679 Removing: /var/run/dpdk/spdk_pid124327 00:51:22.679 Removing: /var/run/dpdk/spdk_pid124531 00:51:22.679 Removing: /var/run/dpdk/spdk_pid125516 00:51:22.679 Removing: /var/run/dpdk/spdk_pid126080 00:51:22.679 Removing: /var/run/dpdk/spdk_pid126291 00:51:22.679 Removing: /var/run/dpdk/spdk_pid128513 00:51:22.679 Removing: /var/run/dpdk/spdk_pid129087 00:51:22.679 Removing: /var/run/dpdk/spdk_pid129302 00:51:22.679 Removing: /var/run/dpdk/spdk_pid131529 00:51:22.679 Removing: /var/run/dpdk/spdk_pid132038 00:51:22.679 Removing: /var/run/dpdk/spdk_pid132258 00:51:22.679 Removing: /var/run/dpdk/spdk_pid134506 00:51:22.679 Removing: /var/run/dpdk/spdk_pid135277 00:51:22.679 Removing: /var/run/dpdk/spdk_pid135494 00:51:22.679 Removing: /var/run/dpdk/spdk_pid137947 00:51:22.679 Removing: /var/run/dpdk/spdk_pid138514 00:51:22.679 Removing: /var/run/dpdk/spdk_pid138741 00:51:22.679 Removing: /var/run/dpdk/spdk_pid141202 00:51:22.679 Removing: /var/run/dpdk/spdk_pid141756 00:51:22.679 Removing: /var/run/dpdk/spdk_pid141979 00:51:22.679 Removing: /var/run/dpdk/spdk_pid144484 00:51:22.679 Removing: /var/run/dpdk/spdk_pid145371 00:51:22.679 Removing: /var/run/dpdk/spdk_pid145607 00:51:22.679 Removing: /var/run/dpdk/spdk_pid145828 00:51:22.679 Removing: /var/run/dpdk/spdk_pid146387 00:51:22.679 Removing: /var/run/dpdk/spdk_pid147346 00:51:22.938 Removing: /var/run/dpdk/spdk_pid147850 00:51:22.938 Removing: /var/run/dpdk/spdk_pid148738 00:51:22.938 Removing: /var/run/dpdk/spdk_pid149332 00:51:22.938 Removing: /var/run/dpdk/spdk_pid150310 00:51:22.938 Removing: /var/run/dpdk/spdk_pid150858 00:51:22.938 Removing: /var/run/dpdk/spdk_pid153784 00:51:22.938 Removing: /var/run/dpdk/spdk_pid154534 00:51:22.938 Removing: /var/run/dpdk/spdk_pid155114 00:51:22.938 Removing: /var/run/dpdk/spdk_pid158299 00:51:22.938 Removing: /var/run/dpdk/spdk_pid159160 00:51:22.938 Removing: /var/run/dpdk/spdk_pid159787 00:51:22.938 Removing: /var/run/dpdk/spdk_pid161195 00:51:22.938 Removing: /var/run/dpdk/spdk_pid161713 00:51:22.938 Removing: /var/run/dpdk/spdk_pid162995 00:51:22.938 Removing: /var/run/dpdk/spdk_pid163521 00:51:22.938 Removing: /var/run/dpdk/spdk_pid164813 00:51:22.938 Removing: /var/run/dpdk/spdk_pid165334 00:51:22.938 Removing: /var/run/dpdk/spdk_pid166196 00:51:22.938 Removing: /var/run/dpdk/spdk_pid166262 00:51:22.938 Removing: /var/run/dpdk/spdk_pid166323 00:51:22.938 Removing: /var/run/dpdk/spdk_pid166393 00:51:22.938 Removing: /var/run/dpdk/spdk_pid166539 00:51:22.938 Removing: /var/run/dpdk/spdk_pid166686 00:51:22.938 Removing: /var/run/dpdk/spdk_pid166924 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167206 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167238 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167301 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167329 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167361 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167395 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167430 00:51:22.938 Removing: 
/var/run/dpdk/spdk_pid167462 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167502 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167529 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167571 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167602 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167637 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167670 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167711 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167742 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167778 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167810 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167850 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167879 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167940 00:51:22.938 Removing: /var/run/dpdk/spdk_pid167973 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168016 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168105 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168159 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168191 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168237 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168272 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168294 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168369 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168396 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168447 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168481 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168509 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168534 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168572 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168596 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168632 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168660 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168706 00:51:22.938 Removing: /var/run/dpdk/spdk_pid168762 00:51:23.198 Removing: /var/run/dpdk/spdk_pid168790 00:51:23.198 Removing: /var/run/dpdk/spdk_pid168844 00:51:23.198 Removing: /var/run/dpdk/spdk_pid168875 00:51:23.198 Removing: /var/run/dpdk/spdk_pid168899 00:51:23.198 Removing: /var/run/dpdk/spdk_pid168967 00:51:23.198 Removing: /var/run/dpdk/spdk_pid168994 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169049 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169078 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169108 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169132 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169160 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169191 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169222 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169247 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169345 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169454 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169616 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169648 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169705 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169769 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169814 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169847 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169877 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169933 00:51:23.198 Removing: /var/run/dpdk/spdk_pid169965 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170055 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170115 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170179 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170455 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170587 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170634 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170732 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170831 00:51:23.198 Removing: /var/run/dpdk/spdk_pid170875 00:51:23.198 Removing: 
/var/run/dpdk/spdk_pid171140 00:51:23.198 Removing: /var/run/dpdk/spdk_pid171254 00:51:23.198 Removing: /var/run/dpdk/spdk_pid171357 00:51:23.198 Removing: /var/run/dpdk/spdk_pid171424 00:51:23.198 Removing: /var/run/dpdk/spdk_pid171462 00:51:23.198 Removing: /var/run/dpdk/spdk_pid171548 00:51:23.198 Removing: /var/run/dpdk/spdk_pid171982 00:51:23.198 Removing: /var/run/dpdk/spdk_pid172036 00:51:23.198 Removing: /var/run/dpdk/spdk_pid172359 00:51:23.198 Removing: /var/run/dpdk/spdk_pid172463 00:51:23.198 Removing: /var/run/dpdk/spdk_pid172572 00:51:23.198 Removing: /var/run/dpdk/spdk_pid172641 00:51:23.198 Removing: /var/run/dpdk/spdk_pid172674 00:51:23.198 Removing: /var/run/dpdk/spdk_pid172712 00:51:23.198 Removing: /var/run/dpdk/spdk_pid174050 00:51:23.198 Removing: /var/run/dpdk/spdk_pid174208 00:51:23.198 Removing: /var/run/dpdk/spdk_pid174212 00:51:23.198 Removing: /var/run/dpdk/spdk_pid174230 00:51:23.198 Removing: /var/run/dpdk/spdk_pid174725 00:51:23.198 Removing: /var/run/dpdk/spdk_pid174836 00:51:23.198 Removing: /var/run/dpdk/spdk_pid175749 00:51:23.198 Removing: /var/run/dpdk/spdk_pid176633 00:51:23.198 Removing: /var/run/dpdk/spdk_pid176703 00:51:23.198 Removing: /var/run/dpdk/spdk_pid176763 00:51:23.198 Removing: /var/run/dpdk/spdk_pid177059 00:51:23.198 Removing: /var/run/dpdk/spdk_pid177246 00:51:23.198 Removing: /var/run/dpdk/spdk_pid177355 00:51:23.198 Removing: /var/run/dpdk/spdk_pid177463 00:51:23.198 Removing: /var/run/dpdk/spdk_pid177533 00:51:23.198 Removing: /var/run/dpdk/spdk_pid177571 00:51:23.198 Clean 00:51:23.456 12:14:55 -- common/autotest_common.sh@1450 -- # return 0 00:51:23.456 12:14:55 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:51:23.456 12:14:55 -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:23.456 12:14:55 -- common/autotest_common.sh@10 -- # set +x 00:51:23.456 12:14:55 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:51:23.456 12:14:55 -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:23.456 12:14:55 -- common/autotest_common.sh@10 -- # set +x 00:51:23.457 12:14:55 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:51:23.457 12:14:55 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:51:23.457 12:14:55 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:51:23.457 12:14:55 -- spdk/autotest.sh@395 -- # hash lcov 00:51:23.457 12:14:55 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:51:23.457 12:14:55 -- spdk/autotest.sh@397 -- # hostname 00:51:23.457 12:14:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:51:23.715 geninfo: WARNING: invalid characters removed from testname! 
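The long lcov invocations around this point are hard to read inline, but they follow the usual capture → merge-with-baseline → filter flow: the command just above captures counters gathered during the tests, and the commands that follow merge them with the pre-test baseline and strip non-SPDK sources. The sketch below only restates what those logged commands do; the LCOV_OPTS shorthand, the OUT variable, and folding the five filters into one loop are illustrative simplifications, not the literal autotest.sh code, and some of the logged --rc/-t options are omitted for brevity.

# Distilled sketch of the coverage post-processing shown in the log (paths and
# .info file names are taken from the log; the variables and loop are only a
# simplification for readability).
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
OUT=/home/vagrant/spdk_repo/output

# 1. Capture the counters produced while the tests ran.
lcov $LCOV_OPTS -c -d /home/vagrant/spdk_repo/spdk -o "$OUT/cov_test.info"

# 2. Merge with the baseline captured before the tests (next command in the log).
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Drop sources that are not SPDK's own code: DPDK, system headers, sample apps.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done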
00:52:10.402 12:15:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:14.587 12:15:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:17.153 12:15:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:20.437 12:15:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:23.737 12:15:55 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:26.333 12:15:58 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:29.675 12:16:01 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:52:29.675 12:16:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:29.675 12:16:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:52:29.675 12:16:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:29.675 12:16:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:29.675 12:16:01 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:29.675 12:16:01 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:29.675 12:16:01 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:29.675 12:16:01 -- paths/export.sh@5 -- $ export PATH 00:52:29.675 12:16:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:29.675 12:16:01 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:52:29.675 12:16:01 -- common/autobuild_common.sh@437 -- $ date +%s 00:52:29.675 12:16:01 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718021761.XXXXXX 00:52:29.675 12:16:01 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718021761.gPMTfA 00:52:29.675 12:16:01 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:52:29.675 12:16:01 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:52:29.675 12:16:01 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:52:29.675 12:16:01 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:52:29.675 12:16:01 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:52:29.675 12:16:01 -- common/autobuild_common.sh@453 -- $ get_config_params 00:52:29.675 12:16:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:52:29.675 12:16:01 -- common/autotest_common.sh@10 -- $ set +x 00:52:29.675 12:16:01 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:52:29.675 12:16:01 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:52:29.675 12:16:01 -- pm/common@17 -- $ local monitor 00:52:29.675 12:16:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:29.675 12:16:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:29.675 12:16:01 -- pm/common@25 -- $ sleep 1 00:52:29.675 12:16:01 -- pm/common@21 -- $ date +%s 00:52:29.675 12:16:01 -- pm/common@21 -- $ date +%s 00:52:29.675 12:16:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1718021761 00:52:29.675 12:16:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1718021761 00:52:29.675 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1718021761_collect-vmstat.pm.log 00:52:29.675 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1718021761_collect-cpu-load.pm.log 00:52:30.611 12:16:02 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:52:30.611 12:16:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 
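Before packaging starts, autopackage launches the collect-cpu-load and collect-vmstat samplers, points their output at the power/ directory, and installs an EXIT trap so they are stopped when the script finishes; the kill -TERM calls on the next line consume the .pid files those samplers leave behind. The sketch below shows the pid-file pattern in a condensed form; the start_monitor/stop_monitors function names, the SPDK_DIR variable, and writing $! into the pid file from the wrapper are assumptions made for the illustration (in the real pm/common helpers the samplers manage their own pid files).

# Minimal sketch of the monitor start/stop pattern visible in the log.
# Assumed helper names and wiring; only the script paths, flags, and pid-file
# names come from the log itself.
POWER_DIR=/home/vagrant/spdk_repo/output/power
mkdir -p "$POWER_DIR"

start_monitor() {                      # e.g. start_monitor collect-cpu-load
    local name=$1
    "$SPDK_DIR/scripts/perf/pm/$name" -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$(date +%s)" &
    echo $! > "$POWER_DIR/$name.pid"   # assumption: wrapper records the PID; consumed by kill -TERM later
}

stop_monitors() {
    local pidfile
    for pidfile in "$POWER_DIR"/*.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
    done
}
trap stop_monitors EXIT

start_monitor collect-cpu-load
start_monitor collect-vmstat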
00:52:30.611 12:16:02 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:52:30.611 12:16:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:52:30.611 12:16:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:52:30.611 12:16:02 -- spdk/autopackage.sh@19 -- $ timing_finish 00:52:30.611 12:16:02 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:52:30.611 12:16:02 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:52:30.611 12:16:02 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:52:30.611 12:16:02 -- spdk/autopackage.sh@20 -- $ exit 0 00:52:30.611 12:16:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:52:30.611 12:16:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:52:30.611 12:16:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:52:30.611 12:16:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:30.611 12:16:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:52:30.611 12:16:02 -- pm/common@44 -- $ pid=179134 00:52:30.611 12:16:02 -- pm/common@50 -- $ kill -TERM 179134 00:52:30.611 12:16:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:30.611 12:16:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:52:30.611 12:16:02 -- pm/common@44 -- $ pid=179136 00:52:30.611 12:16:02 -- pm/common@50 -- $ kill -TERM 179136 00:52:30.611 + [[ -n 2150 ]] 00:52:30.611 + sudo kill 2150 00:52:30.618 [Pipeline] } 00:52:30.631 [Pipeline] // timeout 00:52:30.635 [Pipeline] } 00:52:30.648 [Pipeline] // stage 00:52:30.652 [Pipeline] } 00:52:30.663 [Pipeline] // catchError 00:52:30.670 [Pipeline] stage 00:52:30.671 [Pipeline] { (Stop VM) 00:52:30.681 [Pipeline] sh 00:52:30.956 + vagrant halt 00:52:35.143 ==> default: Halting domain... 00:52:45.147 [Pipeline] sh 00:52:45.428 + vagrant destroy -f 00:52:49.616 ==> default: Removing domain... 00:52:49.630 [Pipeline] sh 00:52:49.911 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output 00:52:49.921 [Pipeline] } 00:52:49.942 [Pipeline] // stage 00:52:49.948 [Pipeline] } 00:52:49.967 [Pipeline] // dir 00:52:49.974 [Pipeline] } 00:52:49.993 [Pipeline] // wrap 00:52:49.999 [Pipeline] } 00:52:50.014 [Pipeline] // catchError 00:52:50.025 [Pipeline] stage 00:52:50.027 [Pipeline] { (Epilogue) 00:52:50.042 [Pipeline] sh 00:52:50.324 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:53:16.872 [Pipeline] catchError 00:53:16.874 [Pipeline] { 00:53:16.890 [Pipeline] sh 00:53:17.211 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:53:17.470 Artifacts sizes are good 00:53:17.479 [Pipeline] } 00:53:17.497 [Pipeline] // catchError 00:53:17.505 [Pipeline] archiveArtifacts 00:53:17.510 Archiving artifacts 00:53:18.007 [Pipeline] cleanWs 00:53:18.019 [WS-CLEANUP] Deleting project workspace... 00:53:18.019 [WS-CLEANUP] Deferred wipeout is used... 00:53:18.026 [WS-CLEANUP] done 00:53:18.028 [Pipeline] } 00:53:18.046 [Pipeline] // stage 00:53:18.052 [Pipeline] } 00:53:18.071 [Pipeline] // node 00:53:18.076 [Pipeline] End of Pipeline 00:53:18.114 Finished: SUCCESS
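Once the monitors are killed, the remaining pipeline stages only tear the test VM down and hand the results back to Jenkins. Strung together as shell steps, that teardown looks roughly like the sketch below; the commands and script paths are the ones shown in the log, while running them outside the Jenkins stages (and the working directory for the vagrant calls, which the log does not print) are assumptions.

# Roughly what the final stages do, as it would look run by hand.
vagrant halt                # "Stop VM" stage: "Halting domain..."
vagrant destroy -f          # "Removing domain..."
mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output  # hand results back to the workspace
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh     # Epilogue: shrink logs before archiving
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # prints "Artifacts sizes are good"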